Article

An Effective Personality-Based Model for Short Text Sentiment Classification Using BiLSTM and Self-Attention

1 School of Computer and Software Engineering, Xihua University, Chengdu 610039, China
2 State Grid Suining Power Supply Company, Suining 629000, China
3 School of Architecture and Civil Engineering, Xihua University, Chengdu 610039, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(15), 3274; https://doi.org/10.3390/electronics12153274
Submission received: 29 May 2023 / Revised: 19 July 2023 / Accepted: 25 July 2023 / Published: 30 July 2023
(This article belongs to the Section Artificial Intelligence)

Abstract

While user-generated textual content on social platforms such as Weibo provides valuable insights into public opinion and social trends, the influence of personality on sentiment expression has been largely overlooked in previous studies, especially in Chinese short texts. To bridge this gap, we propose the P-BiLSTM-SA model, which integrates personalities into sentiment classification by combining BiLSTM and self-attention mechanisms. We grouped Weibo texts based on personalities and constructed a personality lexicon using the Big Five theory and clustering algorithms. Separate sentiment classifiers were trained for each personality group using BiLSTM and self-attention, and their predictions were combined by ensemble learning. The performance of the P-BiLSTM-SA model was evaluated on the NLPCC2013 dataset and showed significant accuracy improvements. In particular, it achieved 82.88% accuracy on the NLPCC2013 dataset, a 7.51% improvement over the baseline BiLSTM-SA model. The results highlight the effectiveness of incorporating personality factors into sentiment classification of short texts.

1. Introduction

The increasing popularity of social media platforms such as Weibo has produced a huge amount of information that can be used for public opinion analysis and business promotion, and sentiment classification plays an important role in this field. However, because platforms such as Weibo limit the number of characters in a post, users often express their emotions with concise, personalized wording, which poses a challenge for sentiment classification and calls for more fine-grained methods.
Extracting sentiment information from short texts is a critical task in sentiment classification research. Commonly used techniques for sentiment classification include rule-based, machine learning-based, and deep learning-based methods. Rule-based approaches rely on sentiment lexicons or expert-generated features, but feature engineering can be tedious and expensive. Machine learning-based methods treat sentiment classification as a task similar to document or topic classification, and use machine learning algorithms to classify text according to its sentiment polarity. However, this approach faces challenges such as sparse feature vectors, dimensionality explosion, and difficulty in feature extraction. Deep learning-based methods construct vectorized representations of words in text, and then create sentence-level and document-level representations to learn deep semantic information from the text. Deep learning models include long short-term memory (LSTM) and convolutional neural network (CNN) and their variants. Among these models, bidirectional LSTM has shown promising results in capturing the contextual information of a text.
Psychological research has demonstrated that the way people write or speak is influenced by their personality, and has confirmed the relationship between personality and emotional expression, as well as word use. For example, extroverts, especially younger ones, tend to be more outgoing, express their emotions directly, and may use internet slang such as “HBD” and “LMAO” that is rarely used by people with other personalities. People with different personalities express themselves in different ways [1], suggesting that the accuracy of text sentiment classification can be improved by extracting emotional expression characteristics of different personality traits. The Big Five is relatively popular and widely used in academic personality research, so we choose the Big Five model as the standard for personality classification, which describes people on five dimensions including agreeableness, extraversion, conscientiousness, openness and neuroticism.
Using machine learning to extract general emotional features from text often fails to distinguish the personal characteristics of users, resulting in poor performance of sentiment classifiers. To accurately understand the semantic information in Weibo text, it is essential to consider the contextual relationship between the text before and after, as well as the long-range correlation between words. Although LSTM is able to capture longer semantic dependencies, it only captures forward semantic information and does not recognize backward semantic information. However, the bidirectional LSTM (BiLSTM) model, which involves both forward and backward LSTMs, can perceive the contextual information of the sentence. The self-attention mechanism focuses on the target to be emphasized, gives it more weight, extracts more detailed information about the target, and ignores irrelevant information. This mechanism is a type of attention mechanism that calculates attention on its own words, without considering the direct distance, and is able to fully consider the semantic and syntactic connections between sentences and words and capture the internal structure of the sentence. By combining these two models, it becomes possible to learn both the contextual information of the sentence and the deep-level emotional expression information of different personalities, thereby improving the sentiment classification effect of Weibo texts. Therefore, we propose a model called P-BiLSTM-SA, which combines BiLSTM and self-attention as well as personalities to achieve sentiment classification of Weibo texts. First, the texts are divided into groups based on personalities, and based on this, personality-based sentiment classifiers are trained for each group. Finally, the prediction result of each classifier is ensembled to output the final sentiment polarity. The contributions of this paper are as follows:
(1) Construct a lexicon of personality based on Chinese Weibo texts.
(2) Discuss the correlation between personalities and Weibo texts posted by users.
(3) Propose a model P-BiLSTM-SA for sentiment classification, which is used to train the personality-based classifiers and then ensemble the prediction of each classifier to output the final sentiment polarity.
The rest of this paper is organized as follows. The second part presents related work. The third part describes the methods and results. The fourth part presents the conclusion and future work.

2. Related Work

2.1. Lexicon-Based Methods

A widely used tool for identifying individual personalities is the linguistic inquiry and word count (LIWC) method. LIWC searches for target words or stems from a variety of lexicons, classifies them into linguistic dimensions, and then converts the raw count into a percentage of total words. LIWC has found wide application in natural language processing and is particularly favored by researchers for exploring the relationship between emotional expression and personality [2,3]. Researchers have discovered significant correlations between certain LIWC categories and personality traits such as extraversion (e.g., personal pronouns), neuroticism (e.g., negative emotion words), and agreeableness (e.g., positive emotion words). These findings suggest that an individual’s personality traits can be reflected in the words they use. In China, a Chinese language psychoanalysis system called “TextMind” has been developed and is similar in function to LIWC. Cui et al. [2] utilized TextMind to study and analyze the expression of different personalities in Chinese language using microblog data. Salsabila et al. [4] conducted experimental research and concluded that LIWC, as a linguistic feature, can improve the performance of personality recognition. In addition, Schwartz et al. [5] collected information from 75,000 Facebook users through the My Personality application and analyzed the linguistic characteristics of these users based on their personalities. However, personality lexicons based on foreign texts may not be applicable to Chinese personality recognition.
Sentiment lexicons are employed to evaluate the sentiment polarity of texts based on predefined rules. As a result, lexicon-based sentiment classification methods rely heavily on the quality of the lexicon and the evaluation rules. These aspects are often based on human experience or prior knowledge, resulting in high labor costs [6]. Currently available Chinese sentiment lexicons, such as HowNet, the sentiment vocabulary ontology database from Dalian University of Technology, and TUSD, have limitations in terms of coverage and adaptability to different domains, time periods, and language environments. This is mainly due to the emergence of new words on the internet that carry rich emotional information but are not included in existing sentiment lexicons. To address this issue, researchers [7] have proposed sentiment word extraction methods based on the distribution of parts of speech and the co-occurrence of emotion words in microblog data. In addition, sentiment lexicons tailored to specific domains, such as photography [8], e-commerce [9] and travel [10,11], have been constructed and shown to outperform generic sentiment lexicons. However, sentiment classification based solely on sentiment lexicons lacks flexibility and struggles with different parts of speech and meanings.

2.2. Machine Learning-Based Methods

Machine learning methods typically preprocess text data to remove irrelevant information. This includes standardizing the text through techniques such as removing stop words and punctuation. After preprocessing, feature extraction methods such as term frequency-inverse document frequency (TF-IDF) and N-grams are used to represent the text in terms of numerical features, which are then fed into machine learning classifiers [12,13]. Various classifiers, including support vector machines, naive Bayes, logistic regression, random forest, and decision trees, can be used to solve problems such as text sentiment categorization and personality recognition [14,15]. The machine learning classifiers are trained on the extracted numerical features and used for text classification [13].
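As an illustration of this pipeline, the following Python sketch (not taken from any of the cited studies) feeds TF-IDF features with unigram and bigram terms into a linear SVM; the example texts and labels are hypothetical placeholders.

```python
# Minimal sketch: TF-IDF (with N-gram terms) features fed into a classical classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["great service and friendly staff", "terrible delay, very angry"]  # hypothetical
labels = [1, 0]  # 1 = positive, 0 = negative

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # unigram + bigram features
    LinearSVC(),
)
model.fit(texts, labels)
print(model.predict(["friendly staff but long delay"]))
```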
Arion et al. [14] attempted to detect users’ personality traits from their social media posts using random forest (RF), K-nearest neighbor (KNN), and support vector machine (SVM) classifiers. Wei et al. [15] used the bag-of-words method to represent Chinese words associated with a user’s personality, combined it with the K-means algorithm to cluster these words, and then used the number of items in each cluster as the textual representation of users for personality recognition. Similarly, Arnoux et al. [16] extracted textual features using both linguistic inquiry and word count (LIWC) and bag-of-words methods; in their experiments, Gaussian processes combined with bag-of-words proved less effective than ridge regression.
Sentiment analysis uses machine learning models to analyze text data and classify it as positive, negative, or neutral. Numerous studies have been conducted on sentiment analysis using various machine learning models and techniques. For example, Saad et al. [17] used six different models to analyze Twitter data from U.S. airlines and found that support vector machine (SVM) achieved the highest accuracy. Alzyout et al. [18] studied violence against women using SVM and achieved an accuracy of 78.25%. Jemai et al. [19] developed a sentiment analyzer using five different models and found that naive Bayes performed best with an accuracy of 99.73%. Other methods such as conditional random field (CRF) [20], approached decoding algorithm [21,22], gradient descent and random forest [23] have also been proposed to effectively extract sentiment features and achieve sentiment classification.
These methods tend to have higher classification accuracy, improved scalability and repeatability. However, they rely on the quality of the corpus and the subjective labeling of the data, which can affect the classification results. In recent years, deep learning methods have also shown promise in sentiment analysis, particularly in capturing the complex relationships between words in a sentence.

2.3. Deep Learning-Based Methods

Deep learning-based approaches primarily use word embedding methods to represent words in text and then construct semantic representations at the sentence or document level. These deep learning models are used to extract and learn sentiment features from the text to enable classification [24,25]. Deep learning methods excel in natural language processing (NLP) compared to traditional machine learning methods because they do not require lexicon building or grammatical analysis [26,27,28,29]. With a sufficiently large training dataset, deep learning models can be trained to achieve high classification accuracy and generalization ability [30,31,32], making them increasingly popular in NLP. Arbane et al. [33] proposed a model based on BiLSTM for sentiment classification and public opinion analysis of COVID-19 on Twitter and Reddit. Their results highlight the importance of using NLP techniques to analyze public opinion in the context of public health issues.
Hernandez and Knight [34] attempted to create a classifier for sorting social media posts into Myers–Briggs Type Index (MBTI) personality types. They used models such as gated recurrent unit (GRU), simple recurrent neural network (RNN), long short-term memory (LSTM), and bidirectional LSTM (BiLSTM), and achieved an overall accuracy of 0.028. Zhou et al. [35] constructed two attention-based BiLSTM architectures that incorporated both emoji and textual information at different semantic levels for personality recognition tasks. Their models achieved state-of-the-art performance over baseline models on a real dataset. While deep learning has shown superior performance in personality recognition, collecting data that captures diverse personalities remains a challenge.
Li et al. [36] constructed multi-channel features and employed self-attention and BiLSTM to capture the relationship between sentiment target words and sentiment polarity words in sentences. Sadir et al. [37] proposed a convolutional neural network model (ACNN-TL) based on the attention mechanism and transfer learning, obtaining the semantic representation of words with Word2Vec and BERT as pre-trained models. Kamab et al. [38] proposed a convolutional neural network model that combines the attention mechanism with BiLSTM, and their experimental results show that it outperforms baseline sentiment analysis methods. Compared to classification methods based on sentiment lexicons and traditional machine learning, deep learning approaches offer better expressiveness and generalization ability, but require a large amount of training data.
Sentiment analysis on social media platforms such as Weibo poses significant challenges due to the limited word count of texts. Many scholars [39,40] have developed algorithms to improve the accuracy of sentiment classification. For example, Jin et al. [41] combined emoji and text sentiment features, utilized CNN to capture local features, and trained a sentiment classifier. Chen et al. [42] employed a convolutional self-encoder to obtain image features of emoji and combined them with the feature vector of Weibo texts to achieve Weibo sentiment classification using a multilayer perceptron. Recognizing the need to incorporate deep learning models and sentiment symbols into existing Weibo text sentiment analysis, Zhang et al. [43] proposed a dual attention model approach to construct a Weibo sentiment symbol library containing sentiment words, negation words, degree adverbs, network words, and Weibo emoticons. The authors demonstrated that the combination of attention models and sentiment symbols effectively improves the ability to capture Weibo sentiment semantics. Another approach by Wang et al. [44] was to develop a Weibo user interest lexicon to calculate sentiment scores and output sentiment results. The authors trained a general classification model using LSTM and employed SVM to ensemble the prediction results of both models to obtain the final sentiment status.
It is worth noting that most current sentiment classification research tends to overlook the influence of personality, especially in Chinese texts. A person’s thought patterns are shaped by their behavior, emotions, psychology, and motivations, collectively known as personality, which strongly influence individual behavior. Language usage patterns in online social media, such as words, phrases, and topics, provide insights into personality traits. In English, extroverted individuals tend to mention social-related vocabulary such as “party” and “love you”, while introverted individuals may use words that reflect solitary activities such as “internet” and “computer” [5]. Similarly, in Chinese, extroverted individuals are more likely to use numerous personal pronouns, indicating their tendency to pay more attention to others [3]. User-generated text can effectively reflect their mental activity and personality traits. For example, individuals high in extraversion often use more words associated with positive emotions, while those high in neuroticism use more words associated with negative emotions [45]. In other words, individuals with similar personality traits show comparable expressions. Leveraging this understanding can improve sentiment classification performance to some extent. Therefore, we chose BiLSTM and self-attention to train classifiers. BiLSTM captures contextual information and focuses on different personality traits, and the prediction results from these classifiers are merged to produce the final predictions.

3. Methods and Results

3.1. Personality Recognition

To explain the differences in personalities, we first construct a personality lexicon specifically for Chinese Weibo texts. Although LIWC is a reliable psychological lexicon, it is designed for English-speaking countries. Due to cultural differences between China and Western countries, people from different regions have different ways of expressing themselves. As a result, the effectiveness of personality identification on Chinese social platforms is limited. In our study, we develop a personality lexicon specifically tailored to Chinese Weibo texts to accurately recognize users’ personalities. In addition, machine learning algorithms are employed to assess the degree or status of users’ performance on various personality dimensions.
To achieve personality recognition, we first assess whether users exhibit certain personality traits using the constructed personality lexicon. If a user does not exhibit any of the particular personality traits, we conclude that they do not belong to any particular personality category. On the other hand, if a user’s Weibo texts indicate the presence of a specific personality trait, we create a new dataset containing those texts. This dataset is then divided into a training set and a test set in a 7:3 ratio. The training set is used to train a classifier specific to that personality trait using machine learning methods, and the test set is used to assess the user’s state or degree of alignment with that particular personality trait.

3.1.1. Dataset Preparation

In this paper, we use the BFI-44 questionnaire [46], which consists of a total of 44 questions assessing different personality dimensions, with 8 or 9 questions assigned to each dimension. Over a period of more than 2 months, we collected a total of 457 questionnaires, of which 379 were considered valid. There were 101 male and 268 female participants. The majority of participants were young, with an average age of 23 years. Specifically, 15.3% were between the ages of 18–20, 72.9% were between the ages of 21–24, and 11.8% were between the ages of 25–30. Each question in the questionnaire was scored on a Likert scale, resulting in scores ranging from 1 to 5. To distinguish users with significant personality expression, the personality scores from the questionnaire are thresholded at μ ± 0.5σ, where μ and σ are the mean and standard deviation of the scores on each dimension. Users with scores above μ + 0.5σ are high-trait users (H), users with scores below μ − 0.5σ are low-trait users (L), and users with scores in between are those with insignificant personality expression (M). Table 1 shows the percentages for the five personality dimensions, namely agreeableness (A), extraversion (E), conscientiousness (C), openness (O) and neuroticism (N).
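The μ ± 0.5σ thresholding can be summarized by the short Python sketch below; it is our own illustration of the rule, and the example scores are hypothetical.

```python
import numpy as np

def label_trait(scores, k=0.5):
    """Label each participant H/M/L on one Big Five dimension using mu +/- 0.5*sigma."""
    scores = np.asarray(scores, dtype=float)
    mu, sigma = scores.mean(), scores.std()
    return np.where(scores > mu + k * sigma, "H",
           np.where(scores < mu - k * sigma, "L", "M"))

# Hypothetical extraversion scores averaged from the Likert items
print(label_trait([2.1, 3.0, 3.2, 4.6, 3.1, 1.9, 4.4]))
```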
For our study, the data collection process involved obtaining participant IDs from the completed questionnaires of the 379 participants. Using these IDs, we crawled the participants’ published texts, resulting in a comprehensive personality corpus. In addition, we randomly crawled about 1.3 GB of texts to build a second corpus, which was used to train Word2vec for subsequent keyword clustering and expansion.

3.1.2. Personality Lexicon Construction

Building a personality lexicon involves three main steps: extracting personality keywords, embedding the keywords, and building the lexicon. First, relevant keywords related to personality traits are extracted from users’ Weibo texts. Then, Word2vec is applied to the corpus to generate vector representations of the extracted keywords. Finally, a machine learning clustering algorithm is used to group the keywords into several clusters, each of which is analyzed and given a semantic name. The keywords extracted from each cluster are then expanded to create a personality lexicon containing words from different semantic categories. The detailed process of building the personality lexicon is illustrated in Figure 1.
(1)
Keyword extraction
In information retrieval, term frequency-inverse document frequency (TF-IDF) is the most widely used method that reflects the importance of a term in a corpus of documents. It assigns weights to each term in a document based on its term frequency and inverse document frequency. Terms with higher weights are considered more important. Therefore, our research uses the TF-IDF method to calculate the weight of each word in a user’s Weibo texts, and extracts a certain number of high-weight words that represent the characteristics of the user’s textual content. However, not all of the extracted keywords are related to personality characteristics. In order to select only the relevant keywords, we combine the results of the user questionnaire with the Chi-square test (CHI).
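The sketch below shows one way TF-IDF weighting and the Chi-square test could be combined to select trait-related keywords, roughly following the description above; the pre-segmented texts, trait labels, and number of retained terms are hypothetical and not the paper’s actual data.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import chi2

# Hypothetical inputs: one space-joined, pre-segmented Weibo history per user,
# and a 0/1 label from the questionnaire (1 = user shows the trait of interest).
docs = ["加班 会议 工资 迟到", "朋友 聚会 开心 旅行", "考试 论文 毕业 焦虑", "聚会 旅行 开心 朋友"]
y = [0, 1, 0, 1]

vec = TfidfVectorizer(token_pattern=r"(?u)\S+")   # texts are already segmented
X = vec.fit_transform(docs)
chi2_scores, _ = chi2(X, y)                       # relevance of each term to the trait label

top = np.argsort(chi2_scores)[::-1][:10]          # keep the highest-scoring terms
keywords = vec.get_feature_names_out()[top]
print(list(keywords))
```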
(2)
Lexicon construction
After obtaining the keywords for different personalities using TF-IDF + CHI, the K-means clustering algorithm is applied to group similar semantically related words using Word2vec to analyze the expression differences of personalities on Weibo. Before clustering, the appropriate number of clusters (k) needs to be determined. The elbow method is used to experiment with k values between 10 and 30, as shown in Figure 2, which indicates that k = 18 produces the best clustering results. Therefore, 18 is chosen as the final number of clusters, and the results of the K-means clustering are shown in Figure 3.
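A minimal sketch of this clustering step is given below, assuming a gensim Word2Vec model and scikit-learn’s KMeans; the toy corpus, keyword list, and small k range are placeholders for the actual 1.3 GB corpus and the k = 10–30 elbow search described above.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

# Tiny placeholder corpus and keyword list (the real inputs are the segmented
# Weibo corpus and the TF-IDF + CHI keywords).
sentences = [["加班", "会议", "工资"], ["朋友", "聚会", "开心"], ["考试", "论文", "毕业"]]
keywords = ["加班", "会议", "朋友", "开心", "考试", "毕业"]

w2v = Word2Vec(sentences, vector_size=50, window=5, min_count=1, workers=1)
vectors = np.array([w2v.wv[w] for w in keywords if w in w2v.wv])

# Elbow method: inspect within-cluster SSE (inertia) over a range of k;
# the paper searches k = 10..30 and settles on k = 18.
sse = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(vectors).inertia_
       for k in range(2, 6)}   # small range only because the demo data is tiny

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)  # k = 18 in the paper
clusters = {w: int(c) for w, c in zip(keywords, km.labels_)}
print(sse, clusters)
```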
According to the semantic characteristics of each category, we assign a name to each category. The partial clustering results of the keywords are listed in Appendix A of the article; these are words that are close to the cluster center and can effectively describe the overall characteristics of each category.
Categories 0 and 11 are related to comments, expressing attitudes toward people and things, including positive and negative evaluations; Category 1 is related to time; Category 2 is related to daily life; Category 3 is related to relationships; Category 4 is related to places; Category 5 is related to cognitive processes; Category 6 is related to blessings; Category 7 is related to platform activities, such as forwarding Weibo, sharing red envelopes, or other platform activities; Categories 8 and 15 describe a person’s emotional state, including positive and negative emotions; Category 9 is related to physical health, mainly describing body parts and health status; Category 10 is related to social events; Category 12 is related to work; Category 13 is related to values; Category 14 is related to school life; Category 16 is related to competition; and Category 17 is related to food.
Compared with SCLIWC, our personality lexicon is specifically designed for Chinese Weibo, with more specific categories, and includes some network slang and popular language, which is beneficial for predicting Weibo users’ personalities.

3.1.3. Correlation Analysis

In order to study the correlation between textual features of Weibo texts and users’ personality traits, keywords are extracted from a user’s Weibo texts, and the set of these keywords is used to represent the user’s textual content. Using the personality lexicon, the number of keywords in each semantic category in the user’s Weibo is calculated. A Pearson correlation analysis is then performed between the personality trait scores obtained from the questionnaire and the different semantic categories in the personality lexicon. The analysis results can be used to explain personality traits from a textual perspective and provide a basis for personality-based sentiment classification of Weibo texts.
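The correlation step amounts to computing Pearson’s r between per-category keyword counts and questionnaire scores, as in the sketch below; the counts and scores shown are hypothetical.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-user features: number of lexicon keywords from one semantic
# category (e.g., "Work") and the questionnaire score for one trait.
work_word_counts = np.array([12, 3, 7, 0, 5, 9, 2, 4])
agreeableness_scores = np.array([2.8, 4.1, 3.3, 4.5, 3.6, 2.9, 4.2, 3.8])

r, p_value = pearsonr(work_word_counts, agreeableness_scores)
print(f"Pearson r = {r:.3f}, p = {p_value:.3f}")
```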
Table 2 shows that agreeableness is negatively correlated with work and that high-agreeableness users are more likely to bless others. Extraversion is positively correlated with relationship terms, suggesting that users who are more extroverted value communication with others more. Neuroticism is positively correlated with both negative and positive emotions, suggesting that high neuroticism users are emotionally unstable. Openness is positively correlated with position, cognition, and values, suggesting that users high in openness are creative, imaginative, and exploratory. High conscientiousness is positively correlated with work and time, indicating that users with high conscientiousness have a strong sense of time and take their work seriously. Low agreeableness is negatively correlated with work, values, and so on.

3.1.4. Experiments and Results

According to the questionnaire, participants’ scores can be calculated in the five dimensions of agreeableness, extraversion, conscientiousness, openness and neuroticism. Based on the scores, they can be classified as high, low, or middle. Therefore, the personality lexicon can first be used to differentiate the five personality dimensions, and then machine learning can be used to classify the extent to which users exhibit different personality dimensions.
Since deep learning is a neural network-based algorithm, a large amount of Weibo user data is needed to train the model for better prediction performance. However, in this study, only 379 valid questionnaires were collected through a survey, which is a relatively small amount of data and does not meet the requirements for training deep learning models. Therefore, traditional machine learning algorithms, including support vector machines (SVM), random forests (RF), and naive Bayes (NB), were good choices for this study. These models were utilized to predict personality traits based on the constructed personality lexicon and the simplified Chinese version of the LIWC, the SCLIWC [47]. It should be noted that the LIWC contains over 70 word categories, not all of which are related to the Big Five personality traits. To address this, Qiu et al. [3] studied the correlation between the different categories of the LIWC and the Big Five personality traits, and our work was based on their research and the SCLIWC to conduct comparative experiments.
As shown in Table 1, nearly half of the participants scored in the middle range for each personality dimension, and the distribution of low and high trait users was relatively balanced. Therefore, a binary classification approach was used to identify personality. First, users were categorized as mid-trait or low–high for a personality dimension. Next, users in the low–high group were further classified as either low-trait or high-trait users. Two experiments were conducted to achieve personality recognition of Weibo users.
The first step in personality recognition was to distinguish whether a Weibo user was low or high in personality traits.
From Table 3, we can see that the average accuracy of user personality classification based on the personality lexicon constructed in this study was higher than that of SCLIWC. The accuracy of each personality trait was also higher than that of SCLIWC, indicating the effectiveness of the personality lexicon in identifying users’ personality traits. In particular, the accuracy of agreeableness and openness was relatively high, reaching 0.7281 and 0.7335, respectively. This may be because the number of users with these two personality traits was relatively large, resulting in a significant proportion of their posted Weibo messages in the dataset.
To analyze users’ performance on different personality dimensions, the Weibo texts and their corresponding users were used to construct datasets for each dimension. These datasets were then divided into training and test sets with a ratio of 7:3. Three classifiers were used to train the models for each dimension: support vector machine (SVM), random forest (RF), and naive Bayes (NB). Each classifier was trained separately for each personality dimension. The results of the experiments are shown in Table 4.
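The per-dimension training loop could look like the following scikit-learn sketch; the segmented texts, H/L labels, and hyperparameters are illustrative rather than the study’s actual data or settings.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Placeholder data for one dimension (e.g., extraversion): segmented texts and
# H/L labels derived from the questionnaire.
texts = ["聚会 朋友 开心", "安静 看书 独处", "派对 热闹 兴奋", "宅 电脑 游戏"] * 5
labels = ["H", "L", "H", "L"] * 5

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.3, random_state=42)   # 7:3 split as in the paper

for clf in (SVC(kernel="linear"), RandomForestClassifier(n_estimators=100), MultinomialNB()):
    model = make_pipeline(TfidfVectorizer(token_pattern=r"(?u)\S+"), clf)
    model.fit(X_train, y_train)
    print(type(clf).__name__, accuracy_score(y_test, model.predict(X_test)))
```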
The personality lexicon outperformed the SCLIWC lexicon when combined with various machine learning methods, suggesting that the personality lexicon was a more effective tool for personality recognition. The combination of the lexicon with machine learning algorithms enabled the extraction of personality traits from the lexicon while taking into account the syntactic and semantic relationships of the text. Although SVM had a higher accuracy rate of 0.6514 in identifying extraversion, RF performed better overall, with average accuracy rates of 0.6826 and 0.6231 when combined with the two lexicons, indicating that RF is more compatible with lexicons, especially the personality lexicon developed in this study.

3.2. Sentiment Classification

Extracting sentiment information from text is an important goal in sentiment analysis. Weibo, a popular social media platform, allows users to share their experiences and engage in discussions. Analyzing Weibo posts can help identify sentiments and trends, which aids in understanding public opinion. Early detection of negative emotions can address psychological issues and enable intervention. In addition, Weibo provides valuable insights for companies to tailor their products and promotions to users’ preferences. Therefore, we propose the P-BiLSTM-SA model for sentiment analysis of Weibo texts, which is based on personality traits and combined with bidirectional long short-term memory (BiLSTM) and the self-attention mechanism. The overall structure is shown in Figure 4. First, texts belonging to users with similar personality traits were grouped together, since people with the same personality are likely to have similar expression patterns. The texts were then preprocessed, and word vectors were generated using Word2vec. These word vectors were then used to form a matrix that was fed into the BiLSTM layer. The resulting output was fed into the self-attention layer, which assigned weights to the features and extracted deep-level sentiment features. As a result, 10 sentiment classifiers and one general sentiment classifier were trained based on different personality traits. Finally, the prediction results of the classifiers were combined using an ensemble strategy to output the sentiment polarity prediction. Here, H and L stand for high and low traits of each personality, such as HE for high extraversion and LE for low extraversion. “All” represents the general texts, i.e., all Weibo texts in the dataset.

3.2.1. P-BiLSTM-SA Model

The P-BiLSTM-SA model is built on personality information and BiLSTM-SA. We first used the constructed personality lexicon to recognize users’ personalities according to the Big Five model and then grouped the texts published by users into 10 groups according to their personality traits. Each group reflects the linguistic expression characteristics of the corresponding trait, which facilitates training basic sentiment classifiers for the different personalities.
When building the classifiers, we chose BiLSTM to capture the contextual information of the texts, and the self-attention mechanism to assign weights based on the importance of words in the texts, learning the sentiment expression patterns of different personalities. To avoid overlooking the common expression characteristics of Weibo users, we also constructed a general text sentiment classifier, which was trained on all Weibo texts in the dataset. The sentiment classifiers were trained using the BiLSTM + self-attention mechanism, as shown in Figure 5.
The sentiment classifiers for both personality traits and general text were developed using an embedding layer, a BiLSTM layer, a self-attention layer, and a Softmax layer. The word vector matrix of the texts served as input for the BiLSTM layer. The outputs of the BiLSTM layer were then fed into the self-attention layer, which assigned weights to the features and extracted deep-level sentiment features to train the personality sentiment classifiers.
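A minimal Keras sketch of one such classifier is shown below, following the layer order described above (embedding, BiLSTM, self-attention, Softmax). The vocabulary size, hidden size, and the use of a single-head MultiHeadAttention layer as a stand-in for scaled dot-product self-attention are our assumptions; the 200-dimensional embeddings, sequence length of 64, Adam optimizer, and categorical cross-entropy loss follow the parameter settings reported later.

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE, MAX_LEN, EMB_DIM, HIDDEN = 50000, 64, 200, 128  # vocab/hidden sizes are assumptions

inputs = layers.Input(shape=(MAX_LEN,), dtype="int32")
x = layers.Embedding(VOCAB_SIZE, EMB_DIM)(inputs)            # Word2vec weights could be loaded here
h = layers.Bidirectional(layers.LSTM(HIDDEN, return_sequences=True))(x)
# Scaled dot-product self-attention over the BiLSTM states (single head as a stand-in)
a = layers.MultiHeadAttention(num_heads=1, key_dim=2 * HIDDEN)(h, h)
c = layers.GlobalAveragePooling1D()(a)                       # sentence-level representation
outputs = layers.Dense(2, activation="softmax")(c)           # negative / positive

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```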
(1)
BiLSTM layer
To better understand the semantic information of words in Weibo texts, it was necessary to consider the contextual relationships between words as well as the long-term correlations between words. Although LSTM can capture long-distance semantic dependencies, conventional LSTM only captures forward semantic information while ignoring backward semantic information. However, the BiLSTM model can capture both forward and backward contextual information of a sentence. Therefore, we adopted the BiLSTM model to encode the semantic information of Weibo texts. For a given Weibo text with word embeddings $\{v_1, v_2, \ldots, v_N\}$, the output of the BiLSTM is $h = [h_1, h_2, \ldots, h_N]$, where $h \in \mathbb{R}^{N \times d}$, $N$ is the length of the sentence, and $d$ is the size of the hidden layer.
(2)
Self-attention layer
The self-attention mechanism assigns weights to each output state $h_i$ of the BiLSTM, resulting in a sentence representation vector matrix. The matrix captures both contextual information and highlights various personality and emotional features of the Weibo text. The weighted feature representation of each word in the sentence is calculated as follows:
$C = \sum_{i=1}^{N} \alpha_i h_i$,  (1)
The importance of the $i$-th word in the whole Weibo text is represented by $\alpha_i$, which is computed according to Formula (2):
$\alpha_i = \mathrm{softmax}\!\left(\frac{h_i h^T}{\sqrt{d_k}}\right)$,  (2)
To prevent the dot product $h_i h^T$ from becoming too large, a scaling factor $\sqrt{d_k}$ is introduced, where $d_k$ is typically set to the dimensionality of the input vectors.
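The NumPy snippet below illustrates one common reading of Formulas (1) and (2): pairwise scores between BiLSTM states are scaled by $\sqrt{d_k}$, normalized with a softmax, and used to weight the states. The toy matrix H stands in for the BiLSTM output; it is our own illustration, not the authors’ code.

```python
import numpy as np

N, d_k = 5, 8                       # sentence length, hidden size (toy values)
rng = np.random.default_rng(0)
H = rng.normal(size=(N, d_k))       # stand-in for the BiLSTM states h_1 ... h_N

scores = H @ H.T / np.sqrt(d_k)     # h_i h^T / sqrt(d_k) for every pair of words
alpha = np.exp(scores - scores.max(axis=-1, keepdims=True))
alpha /= alpha.sum(axis=-1, keepdims=True)   # row-wise softmax -> attention weights

C = alpha @ H                       # weighted feature representation of each word
print(C.shape)                      # (N, d_k)
```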
(3)
Sentiment classification
The final layer of the model is a fully connected network layer that utilizes the Softmax function as its activation function to calculate the predicted probabilities of different emotion labels for the given Weibo text. Specifically, the output of the previous layer serves as the input and is linearly transformed using weights and bias terms. The Softmax function then converts this output into a probability distribution. The formula for this process is as follows:
$p = \mathrm{softmax}(WC + b)$,  (3)
Here, C represents the output vector of the previous layer, while W and b represent the weights and bias terms of the fully connected layer, respectively.
(4)
Ensemble of sentiment classifiers results
For a set of n test texts $(t_1, t_2, \ldots, t_n)$, the 11 sentiment classifiers are used to make predictions about the texts. For a text $t_i$, the j-th classifier outputs the prediction $p_{ij} = (p_{ij}^-, p_{ij}^+)$, where $p_{ij}^-$ and $p_{ij}^+$ represent the probability of the text being predicted as negative or positive, respectively. Based on the outputs of the 11 classifiers, the probabilities are averaged to obtain $p_i = (p_i^-, p_i^+)$, which represents the final prediction result for the text $t_i$; specifically, $p_i = \frac{1}{11}\sum_{j=1}^{11} p_{ij}$. The specific fusion process is shown in Figure 6.
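A short sketch of the averaging step, with hypothetical (negative, positive) probabilities from the 10 personality-specific classifiers plus the general classifier:

```python
import numpy as np

# Hypothetical predicted probabilities for one text t_i from the 11 classifiers.
probs = np.array([
    [0.71, 0.29], [0.68, 0.32], [0.74, 0.26], [0.70, 0.30], [0.65, 0.35],
    [0.72, 0.28], [0.69, 0.31], [0.75, 0.25], [0.66, 0.34], [0.73, 0.27],
    [0.70, 0.30],
])  # shape (11, 2)

p_i = probs.mean(axis=0)            # p_i = (1/11) * sum_j p_ij
label = "positive" if p_i[1] > p_i[0] else "negative"
print(p_i, label)
```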
To better understand the P-BiLSTM-SA model, we provide an example. Once we had trained the 10 personality-specific sentiment classifiers and one general sentiment classifier on the personality dataset, the P-BiLSTM-SA model was established. Figure 7 shows a Weibo text example: “Thank you, a pregnant female colleague often delegates her work to me, claiming that it improves my job ability, what’s the problem [emoji]”. We used the Youdao API to translate non-Chinese text and converted emoticons to text using Emojiswift, resulting in “Thank you, a pregnant female colleague often delegates her work to me, claiming it enhances my job capabilities [hehe], what’s the problem”. After a series of processing steps, the sentence was transformed into its final form: “Thank/pregnant/female/colleague/often/work/delegate/claim/enhance/job/capabilities/hehe/problem”. The sentence was then tokenized and vectorized using Word2vec before being fed into each of the sentiment classifiers. This yielded 11 predicted probability pairs, which were averaged to give the overall prediction result of [0.706387 0.284522], indicating a negative sentiment.

3.2.2. Experiments and Results

(1)
Data processing
The experimental data came from 733 users on Sina Weibo. After preprocessing and labeling, there were 71,961 original Weibo texts, of which 40,483 were positive and 31,478 were negative. The Weibo texts were divided into training, validation and test sets in the ratio of 7:2:1. In order to construct personality-based sentiment classifiers, the texts posted by the 733 users were first grouped according to their characteristics. The results are shown in Table 5.
The Weibo texts contained considerable noise due to irregular expressions, so preprocessing was needed. First, we removed unnecessary information such as videos, images, URLs, and special symbols such as “@” and “#” from the Weibo content, while keeping text and emoticons. Then, we translated phrases in Weibo posts from non-Chinese languages to Chinese using Youdao’s translation API and converted emoticons into textual representations. Finally, we filtered out common but meaningless words from the cleaned data using a merged stop word list. After cleaning the data, the sentences were split into words using Jieba, a Chinese word segmentation tool, and Word2Vec was then used to convert the tokens into word vectors. Since the texts varied in length, the sequences had different dimensions, which can be a challenge for deep learning models such as LSTM. To ensure consistent dimensions, the sequences were transformed into an embedding matrix using padding. In this study, a sequence length of 64, corresponding to the longest Weibo text in the datasets, was used for padding.
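The preprocessing pipeline might be sketched as follows; the regular expression, toy stop-word list, and example post are our own illustrations, while the Jieba segmentation and padding to length 64 follow the description above.

```python
import re
import jieba
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.preprocessing.text import Tokenizer

raw = "#话题# @某人 今天加班到深夜，好累 http://t.cn/xxxx [泪]"   # hypothetical post

# Strip URLs, @mentions, and hashtag markers; keep text and emoticon tags like [泪].
clean = re.sub(r"http\S+|@\S+|#", "", raw)
stop_words = {"的", "了", "，", "。"}                             # toy stop-word list
tokens = [w for w in jieba.lcut(clean) if w.strip() and w not in stop_words]

tokenizer = Tokenizer()
tokenizer.fit_on_texts([tokens])
seq = tokenizer.texts_to_sequences([tokens])
padded = pad_sequences(seq, maxlen=64, padding="post")           # fixed length 64, as in the paper
print(tokens, padded.shape)
```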
(2)
Parameter setting
The proposed model, P-BiLSTM-SA, has specific parameters as shown in Table 6. In the experiments, the word embeddings were set to 200 dimensions, Adam was chosen as the optimization function, and the loss function was the categorical cross-entropy.
(3)
Experiments and discussion
To validate the effectiveness of the basic sentiment classifiers for each personality trait, we conducted a comparison experiment between the basic sentiment classifiers and the P-BiLSTM-SA model, and the results are shown in Table 7. Obviously, the accuracy of a single basic sentiment classifier was lower than that of the P-BiLSTM-SA model, which shows that the personality-based sentiment classifiers can effectively capture the personality-specific emotional features expressed in texts, and thus, integrating the outputs of multiple classifiers can efficiently improve the accuracy of sentiment classification. Among the basic sentiment classifiers, except for the HC and LN classifiers, the accuracy of the other classifiers was higher than that of the universal classifier ALL, suggesting that incorporating personality factors into sentiment classification can enable the model to learn the specific emotional expression styles or preferences of different personalities, thus improving the accuracy of sentiment classification. The lower classification accuracy of the HC and LN classifiers may have been due to the fact that the text data for these two classes were less than those for the other classes, which limited the learning of text features by the model during training and affected the effectiveness of emotion classification.
To further investigate the impact of personality on sentiment classification results, we deliberately removed the basic sentiment classifier for a particular personality trait from the P-BiLSTM-SA model and compared it with the original model. The experimental results are shown in Table 8. It can be observed that the P-BiLSTM-SA model achieved the highest F1-score and accuracy, indicating that the removal of one of the basic sentiment classifiers affected the final sentiment classification performance. This also indirectly demonstrates the scientific and rational nature of the Big Five personality theory.
To evaluate the performance of the P-BiLSTM-SA model, a comparison was made with several baseline models, including P-LSTM, P-BiLSTM, BiLSTM-SA, BiLSTM + EMB-ATT [48], and EMCNN [49], all trained on the same dataset. Furthermore, an additional open dataset, NLPCC2013, was used for further evaluation and testing of the models. The results are shown in Table 9 and Table 10.
Compared to the BiLSTM-SA model, P-BiLSTM-SA showed better performance in sentiment classification. Although BiLSTM-SA could capture deeper levels of sentiment information through the self-attention mechanism during model training, it lacked the ability to learn different linguistic expressions of emotions associated with different personality traits, resulting in inferior performance compared to P-BiLSTM-SA. Therefore, the integration of personality factors proved to be beneficial for sentiment classification in Weibo. Compared to P-LSTM, P-BiLSTM obviously had a better classification performance, especially for complex sentences.
The results for P-BiLSTM-SA in terms of accuracy, recall, precision, and F1 score were superior to those of other models, suggesting that pre-classifying Weibo texts according to users’ personalities enabled the self-attention mechanism in the model to learn the deep-seated emotional characteristics of different personalities more effectively. Furthermore, this approach could compensate for BiLSTM’s inability to capture the contextual information of long sentences. Finally, by integrating the outputs of different classifiers, the proposed model was able to reduce the generalization error. As a result, the P-BiLSTM-SA achieved good performance.
The F1 score for sentiment classification was higher for the NLPCC2013 dataset. The authors analyzed the possible reasons and suggested that the Weibo dataset constructed in this paper included texts published from January 2013 to December 2019, and some of those in the test set as well as the NLPCC2013 dataset were from the same period. As Weibo is a social network platform, the texts published during the same period may have similar expressions in terms of word choice and language use, which may have contributed to the better performance of the model on the NLPCC2013 dataset. In addition, with the rapid development of the internet and smartphones, people are exposed to different types of content online, and new words and dialect expressions often appear on Weibo. The NLPCC2013 dataset contained fewer of these words than the Weibo test dataset used in this study, which may have caused the model to make more errors in predicting the polarity of texts containing these words, leading to poorer performance on the Weibo dataset.
In addition, a comparison of the experimental results based on P-BiLSTM-SA and BiLSTM-SA was performed on a selected subset of the data, as shown in Table 11.
From Table 11, we can see that HC personality users tended to be responsible, conscientious, and self-disciplined. In contrast, LC personality users were considered lazy and lacking in self-discipline. Users with the HE personality trait were characterized by their passion and liveliness, while the use of words such as tired and pain were often used by people with the LE personality trait to express negative emotions. Users with an HA personality trait tended to be open and generous in their emotional expression, while users with an LA personality trait often conveyed emotions that were difficult to discern, with negative emotions such as “hehe” more common. Individuals with an HO personality trait were found to have a positive emotional disposition and a passion for life and food, whereas individuals with an LO personality trait were found to lack creativity, curiosity and interest in everything. For example, texts (3) and (5) both described someone’s abilities as good, but their emotional states in terms of agreeableness were different, with one being HA and the other LA, resulting in different expression styles and polarities. Similarly, texts (6) and (7) expressed positive emotions, with high trait individuals tending toward a positive and optimistic expression style, while low trait individuals tended to express the opposite. Thus, despite the fact that the two individuals who posted these texts had similar levels of agreeableness and extraversion, their ways of expressing emotions differed significantly due to differences in their high and low traits. Texts (8) and (11) showed different expressions of neuroticism, with high-neurotic individuals tending to be emotionally unstable, have a higher prevalence of negative emotions, and exaggerate their feelings, whereas low-neurotic individuals tended to be emotionally stable and have positive emotions. According to the results, compared to BiLSTM-SA without personality factors, the P-BiLSTM-SA model was found to be better at learning deep-level emotional information associated with personality expression during training, and thus performed better at classifying the sentiment of Weibo texts.

4. Conclusions and Future Work

In this paper, we proposed a method that combines personality-based BiLSTM and self-attention mechanisms for sentiment analysis of Weibo texts. We constructed a personality lexicon from Weibo texts that reflects Chinese social culture and linguistic characteristics, trained base classifiers for each personality group using BiLSTM and self-attention, and used ensemble learning to integrate their predictions. Our approach achieved high accuracy on both the public NLPCC2013 dataset (82.88%) and our self-constructed Weibo dataset (81.56%); the temporal overlap between the two datasets and the greater prevalence of new words and dialect expressions in our Weibo test set explain the higher accuracy on NLPCC2013. However, there were also some limitations, so further research can be conducted from the following aspects:
Create a more comprehensive personality dictionary and sentiment corpus that adapts to the context of the internet. In personality recognition, the personality dataset was not comprehensive: young users high in agreeableness and openness accounted for a larger proportion because their enthusiastic and proactive personality traits made them more willing to participate in questionnaires. As a result, the constructed personality lexicon was relatively small, with only 18 categories. In sentiment classification, the content posted by users may have contained memes that we filtered out during text collection, although memes may reflect users’ emotional states. In the future, it may be possible to use image processing techniques to convert the emotional information conveyed by memes into text. In addition, although we did not build one, a specialized corpus of internet slang with emotional connotations [50] could improve the accuracy of sentiment classification.
Improve framework performance by adding pre-trained models. With the development of artificial intelligence, new pre-trained models such as BERT and ALBERT have been proposed and could be used to improve existing frameworks such as P-BiLSTM-SA, etc.
Conduct aspect-level sentiment analysis research on Weibo texts based on attention mechanisms. An overall sentiment analysis of a Weibo text may obscure its details, and the overall sentiment may not reflect people’s fine-grained sentiment toward specific opinion targets. If we only focus on the overall sentiment and ignore the specific details, we may obtain inaccurate results in recommendation systems, question answering, and other real-world applications.
Conduct multimodal sentiment analysis research by combining text, facial expressions, images, etc. On social platforms similar to Weibo, users’ utterances not only include text, but also contain images, videos, voice, etc. The content represents multiple modalities, and there may be certain interactive relationships between different modalities [51]. For example, emotional information in the text may correspond to visual features such as facial expressions in the images. Considering the aspect-level data of multiple modalities, designing effective cross-modal feature interaction methods by modeling intra-modal and inter-modal information will be a very meaningful research approach in the future that can better explore the relationships between different modalities, thereby reducing the annotated sample size requirement of the model and improving the performance of sentiment analysis.

Author Contributions

Conceptualization, Y.F. and K.L.; methodology, Y.F.; formal analysis, Y.F., L.Z. and R.W.; investigation, W.W. and X.Y.; resources, K.L.; data curation, Y.F. and X.C.; writing—original draft preparation, Y.F.; writing—review and editing, Y.F., L.Z. and X.L.; funding acquisition, K.L. and H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Science and Technology Fund of Sichuan Province (No. 2022NSFSC0556, No. 2023YFQ0044), National Natural Science Foundation of China (No. 62202390), “Chunhui Program” cooperative scientific research project of the Ministry of Education (HZKY20220579), and the Opening Project of Lab of Security Insurance of Cyberspace, Sichuan Province.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data are unavailable due to privacy.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Label | Categories | Some Keywords
0 | Positive comments | excellent, rhythm, personality, nice, outgoing, perfect, impression, performance, sensible, mature, patient, hard-working, calm
1 | Time | Monday, weekend, morning, next day, evening, midnight, holiday, the day before
2 | Daily life | catch up, chat, pass by, steal away, shaking hands, shopping, smoking, discounting
3 | Relationship | dad, sister, friend, partner, colleague, grandparents, neighbor, baby, husband, brother
4 | Location | Shanghai, Nanjing, Jinan, Thailand, Yantai, train, Wuhan, city, weather
5 | Cognition process | understand, choose, question, wonder, confuse, figure out, understand, realize, gradually, familiar, know
6 | Blessing | wishes, happy birthday, good luck, smooth, family happiness, celebration, blessings, wonderful
7 | Platform activities | help, vote, super, idol, popularity, live, red packets, cash, received, follow, surprise, opportunity
8 | Positive emotions | go fighting, life, effort, future, life, happiness, summer, thanks, strength, beautiful, energy, youth
9 | Body health | stomach, health care, sauna, massage, vitamins, medicine, back pain, soreness, abs, workouts
10 | Social events | hostage, death, disease, avoidance, sentence, drugging, mediation, victimization, sexual harassment, humiliation, domestic violence
11 | Negative comments | annoying, shady, stupid, disgusting, angry, hateful, joke, unworthy, uninstall, self-directed, haters
12 | Work | meeting, handover, human resources, group, late, work, salary, retirement, work overtime, boss
13 | Values | integrity, society, honor, shame, nation, spirit, culture, rights, collectivism, ideals and beliefs, moral standards, guidance, discipline
14 | School life | college students, teachers, schools, study, homework, papers, classmates, preparation for postgraduate entrance examination, graduation, examination
15 | Negative emotions | things, emotions, sad, experience, mood, fear, disappointment, painful, maybe, give up, anxiety, sorrow
16 | Competition | national football team, women’s basketball team, table tennis, Olympic Games, championships, playing, running, winning, champion, gold
17 | Foods | barley, milk, taste, hot pot, rice, orange, burger, egg, cake, coffee, delicious, seafood, milk tea

References

  1. Lin, J.; Mao, W.; Zeng, D.D. Personality-based refinement for sentiment classification in microblog. Knowl.-Based Syst. 2017, 132, 204–214. [Google Scholar] [CrossRef]
  2. Yuan, C.; Hong, Y.; Wu, J. Personality expression and recognition in Chinese language usage. User Model User-Adapt. Interact. 2021, 31, 121–147. [Google Scholar] [CrossRef]
  3. Lin, Q.; Lu, J.; Ramsay, J.; Yang, S.; Zhu, T. Personality Expression in Chinese Language Use. Int. J. Psychol. 2016, 52, 463–472. [Google Scholar]
  4. Salsabila, G.D.; Setiawan, E.B. Semantic Approach for Big Five Personality Prediction on Twitter. RESTI 2021, 5, 680–687. [Google Scholar] [CrossRef]
  5. Schwartz, H.A.; Eichstaedt, J.C.; Kern, M.L.; Dziurzynski, L.; Ungar, L.H. Personality, Gender, and Age in the Language of Social Media: The Open-Vocabulary Approach. PLoS ONE 2013, 8, e73791. [Google Scholar]
  6. Deng, Y.S.; Hu, H.P.; Xiong, N.X.; Xiong, W.; Liu, L.F. A general hybrid model for chaos robust synchronization and degradation reduction. Inf. Sci. 2015, 305, 146–164. [Google Scholar] [CrossRef]
  7. Liu, D.; Nie, J.; Wan, C.; Liu, X.; Liao, S.; Liao, G. A Classification Based Sentiment Works Extracting Method from Microblogs and Its Feature Engineering. Chin. J. Comput. 2018, 41, 1574–1597. [Google Scholar]
  8. Liu, Y.Q.; Lu, X.Y.; Deng, K.K.; Ruan, D.; Liu, J. Construction method of sentiment lexicon for photography reviews. Comput. Eng. Des. 2019, 40, 3037–3042. [Google Scholar]
  9. Yu, S.; Lu, Q.; Chen, W. Fine-grained Opinion Mining Based on Feature Representation of Domain Sentiment Lexicon. J. Chin. Inf. Process. 2019, 33, 112–121. [Google Scholar]
  10. Lin, Z.; Xie, J.; Yang, T. A method for constructing a multi-topic sentiment lexicon for tourism. Geogr. Geo-Inf. Sci. 2021, 37, 22–27+98. [Google Scholar]
  11. Huang, S.B.; Zeng, Z.W.; Ota, K.; Dong, M.X.; Wang, T.; Xiong, N.N. An Intelligent Collaboration Trust Interconnections System for Mobile Information Control in Ubiquitous 5G Networks. IEEE Trans. Netw. Sci. Eng. 2021, 8, 347–365. [Google Scholar] [CrossRef]
  12. Zeng, Y.Y.; Sreenan, C.J.; Xiong, N.X.; Yang, L.T.; Park, J.H. Connectivity and coverage maintenance in wireless sensor networks. J. Supercomput. 2010, 52, 23–46. [Google Scholar] [CrossRef]
  13. Wu, C.X.; Ju, B.B.; Wu, Y.; Lin, X.; Xiong, N.X.; Xu, G.Q.; Li, H.Y.; Liang, X.F. UAV Autonomous Target Search Based on Deep Reinforcement Learning in Complex Disaster Scene. IEEE Access 2019, 7, 117227–117245. [Google Scholar] [CrossRef]
  14. Mitra, A.; Biswas, A.; Chakraborty, K.; Ghosh, A.; Das, N.; Ghosh, N.; Ghosh, A. A Machine Learning Approach to Identify Personality Traits from Social Media. Mach. Learn. Deep Learn. Effic. Improv. Healthc. Syst. 2022, 31–59. [Google Scholar] [CrossRef]
  15. Wei, H.; Zhang, F.; Yuan, N.J.; Cao, C.; Fu, H.; Xie, X.; Rui, Y.; Ma, W.-Y. Beyond the words: Predicting user personality from heterogeneous information. In Proceedings of the 10th ACM International Conference on Web Search and Data Mining, Cambridge, UK, 6–10 February 2017; pp. 305–314. [Google Scholar]
  16. Arnoux, P.-H.; Xu, A.; Boyette, N.; Mahmud, J.; Akkiraju, R.; Sinha, V. 25 tweets to know you: A new model to predict personality with social media. In Proceedings of the International AAAI Conference on Web and Social Media, Montreal, QC, Canada, 15–18 May 2017; pp. 472–475. [Google Scholar]
  17. Saad, A.I. Opinion mining on US Airline Twitter data using machine learning techniques. In Proceedings of the 2020 16th International Computer Engineering Conference (ICENCO), Cairo, Egypt, 29–30 December 2020; pp. 59–63. [Google Scholar]
  18. Alzyout, M.; Bashabsheh, E.A.; Najadat, H.; Alaiad, A. Sentiment Analysis of Arabic Tweets about Violence Against Women using Machine Learning. In Proceedings of the 2021 12th International Conference on Information and Communication Systems (ICICS), Valencia, Spain, 24–26 May 2021; pp. 171–176. [Google Scholar]
  19. Jemai, F.; Hayouni, M.; Baccar, S. Sentiment Analysis Using Machine Learning Algorithms. In Proceedings of the 2021 International Wireless Communications and Mobile Computing (IWCMC), Harbin, China, 28 June–2 July 2021; pp. 775–779. [Google Scholar]
  20. Zhang, K.; Xie, Y.; Cheng, Y.; Honbo, D. Sentiment Identification by Incorporating Syntax, Semantics and Context Information. In Proceedings of the International ACM SIGIR Conference on Research & Development in Information Retrieval, Portland, OR, USA, 12–16 August 2012; pp. 1143–1144. [Google Scholar]
  21. Wei, G.; Li, S.; Xue, Y.; Meng, W.; Zhou, G. Semi-supervised Sentiment Classification with Self-training on Feature Subspaces. In Proceedings of the Workshop on Chinese Lexical Semantics; Springer: Cham, Switzerland, 2014; pp. 231–239. [Google Scholar]
  22. Fang, W.W.; Li, Y.C.; Zhang, H.J.; Xiong, N.X.; Lai, J.Y.; Vasilakos, A.V. On the throughput-energy tradeoff for data transmission between cloud and mobile devices. Inf. Sci. 2014, 283, 79–93. [Google Scholar] [CrossRef]
  23. Haque, T.U.; Saber, N.N.; Shah, F.M. Sentiment analysis on large scale Amazon product reviews. In Proceedings of the International Conference on Innovative Research and Development, Bangkok, Thailand, 11–12 May 2018; pp. 1–6. [Google Scholar]
  24. Liu, M.; Deng, J.; Yang, M.; Chen, X.; Liu, N.; Liu, M.; Wang, X. Cost Ensemble with Gradient Selecting for GANs. In Proceedings of the the International Joint Conference on Artificial Intelligence, Vienna, Austria, 23–29 July 2022; pp. 1194–1200. [Google Scholar]
  25. Wang, Z.; Li, T.; Xiong, N.X.; Pan, Y. A novel dynamic network data replication scheme based on historical access record and proactive deletion. J. Supercomput. 2012, 62, 227–250. [Google Scholar] [CrossRef]
  26. Xie, T.; Cheng, X.; Wang, X.; Liu, M.; Deng, J.; Zhou, T.; Liu, M. Cut-Thumbnail: A Novel Data Augmentation for Convolutional Neural Network. In Proceedings of the 29th ACM International Conference on Multimedia, Virtual Event, China, 20–24 October 2021; pp. 1627–1635. [Google Scholar]
  27. Wu, L.; Wu, D.; Liu, M.; Wang, X.; Gong, H. Periodic intermittently connected-based data delivery in opportunistic networks. J. Softw. 2013, 24, 507–525. [Google Scholar] [CrossRef]
  28. Lu, H.; Cheng, X.; Xia, W.; Deng, P.; Liu, M.; Xie, T.; Wang, X.; Liu, M. CyclicShift: A Data Augmentation Method For Enriching Data Patterns. In Proceedings of the 30th ACM International Conference on Multimedia, Lisboa, Portugal, 10–14 October 2022; pp. 4921–4929. [Google Scholar]
  29. Song, Y.; Xin, R.; Chen, P.; Zhang, R.; Chen, J.; Zhao, Z. Identifying Performance Anomalies in Fluctuating Cloud Environments: A Robust Correlative-GNN-based Explainable Approach. Future Gener. Comput. Syst. 2023, 145, 77–86. [Google Scholar] [CrossRef]
  30. Li, N.; Liu, Y.; Wu, Y.; Liu, S.; Zhao, S.; Liu, M. Robutrans: A robust transformer-based text-to-speech model. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; pp. 8228–8235. [Google Scholar]
  31. Chen, P.; Liu, H.Y.; Xin, R.Y.; Carval, T.; Zhao, J.L.; Xia, Y.N.; Zhao, Z.M. Effectively Detecting Operational Anomalies In Large-Scale IoT Data Infrastructures By Using A GAN-Based Predictive Model. Comput. J. 2022, 65, 2909–2925. [Google Scholar] [CrossRef]
  32. Yang, J.C.; Xiong, N.X.; Vasilakos, A.V.; Fang, Z.J.; Park, D.; Xu, X.H.; Yoon, S.; Xie, S.J.; Yang, Y. A Fingerprint Recognition Scheme Based on Assembling Invariant Moments for Cloud Computing Communications. IEEE Syst. J. 2011, 5, 574–583. [Google Scholar] [CrossRef]
  33. Arbane, M.; Benlamri, R.; Brik, Y.; Alahmar, A.D. Social media-based COVID-19 sentiment classification model using Bi-LSTM. Expert Syst. Appl. 2023, 212, 118710. [Google Scholar] [CrossRef]
  34. Hernandez, R.; Scott, I. Predicting Myers-Briggs type indicator with text. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017; pp. 1–9. [Google Scholar]
  35. Zhou, L.; Zhang, Z.; Zhao, L.; Yang, P.J.I.S. Attention-based BiLSTM models for personality recognition from user-generated content. Inf. Sci. 2022, 596, 460–471. [Google Scholar] [CrossRef]
  36. Li, W.; Qi, F.; Yu, Z. Sentiment Classification Method Based on Multi-channel Features and Self-attention. J. Softw. 2021, 32, 2783–2800. [Google Scholar]
  37. Sadr, H.; Nazari Soleimandarabi, M. ACNN-TL: Attention-based convolutional neural network coupling with transfer learning and contextualized word representation for enhancing the performance of sentiment classification. J. Supercomput. 2022, 78, 10149–10175. [Google Scholar]
  38. Kamyab, M.; Liu, G.; Adjeisah, M. Attention-Based CNN and Bi-LSTM Model Based on TF-IDF and GloVe Word Embedding for Sentiment Analysis. Appl. Sci. 2021, 11, 11255. [Google Scholar] [CrossRef]
  39. Feng, Y.; Liu, K.; Li, W. Research on Multi-personality Microblog Sentiment Classification Based on BiLSTM+Self-Attention. J. Xihua Univ. Nat. Sci. Ed. 2022, 41, 67–76. [Google Scholar]
  40. Sitaula, C.; Shahi, T.B. Multi-channel CNN to classify nepali COVID-19 related tweets using hybrid features. arXiv 2022, arXiv:2203.10286. [Google Scholar]
  41. Jin, Z.; Hu, B.; Zhang, R. A Deep Learning Based Mechanism with Sentiment Features for Weibo Sentiment Analysis. Acta Sci. Nat. Univ. 2020, 53, 77–81+86. [Google Scholar]
  42. Chen, L.; Liu, Y.; Zhou, Y.; Wu, Y.; Yu, Z. Incorporating image features of emotions into microblog sentiment classification. J. Sichuan Univ. (Nat. Sci. Ed.). 2021, 58, 68–74. [Google Scholar]
  43. Zhang, Y.; Zhen, J.; Huang, G.; Jiang, Y. Microblog sentiment analysis method based on a double attention model. J. Tsinghua Univ. (Sci. Technol.) 2018, 58, 122–130. [Google Scholar]
  44. Wang, Y.; Zhu, C.; Zhu, J.; Li, Y.; Feng, L.; Liu, J. User Interest Dictionary and LSTM Based Method for Personalized Emotion Classification. Comput. Sci. 2021, 48, 251–257. [Google Scholar]
  45. Zhao, J.; Huang, J.F.; Xiong, N.X. An Effective Exponential-Based Trust and Reputation Evaluation System in Wireless Sensor Networks. IEEE Access 2019, 7, 33859–33869. [Google Scholar] [CrossRef]
  46. John, O.P.; Srivastava, S. BIG FIVE INVENTORY (BFI). Available online: https://fetzer.org/sites/default/files/images/stories/pdf/selfmeasures/Personality-BigFiveInventory.pdf (accessed on 3 March 2020).
  47. Huang, C.L.; Chung, C.K.; Hui, N.; Lin, Y.C.; Pennebaker, J.W. Development of the Chinese linguistic inquiry and word count dictionary. Chin. J. Psychol. 2012, 54, 185–201. [Google Scholar]
  48. Guan, P.; Li, B.; Lv, X.; Zhou, J. Attention Enhanced Bi-directional LSTM for Sentiment Analysis. J. Chin. Inf. Process. 2019, 33, 105–111. [Google Scholar]
  49. Chen, H. Sentiment analysis of natural language processing based on deep learning model. In Proceedings of the International Conference on Internet of Things and Machine Learning (IoTML 2021), Shanghai, China, 17–19 December 2021. [Google Scholar]
  50. Sang, Y.; Shen, H.; Tan, Y.; Xiong, N. Efficient protocols for privacy preserving matching against distributed datasets. In Proceedings of the Information and Communications Security: 8th International Conference (ICICS 2006), Raleigh, NC, USA, 4–7 December 2006. [Google Scholar]
  51. Chen Zhuang, T.Q.; Wanli, L.I.; Zhang, T.; Zhou, S.; Zhong, M.; Zhu, Y.; Liu, M. Low-Resource Aspect-Based Sentiment Analysis: A Survey. Chin. J. Comput. 2023, 46, 1445–1472. [Google Scholar]
Figure 1. Flowchart of personality lexicon construction.
Figure 2. Result of the elbow method.
Figure 3. Clustering results of K-Means.
Figure 4. Weibo sentiment classification structure.
Figure 5. Construction of sentiment classifiers based on personality classification.
Figure 6. Ensemble of sentiment classifier prediction results.
Figure 7. An example of P-BiLSTM-SA for understanding.
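Figures 2 and 3 refer to the elbow method and the K-Means clustering used during personality lexicon construction. The sketch below illustrates that kind of analysis on placeholder word vectors; the embedding dimensionality, the range of candidate k values, and the final k = 18 (matching the 18 categories in Table 2) are assumptions for illustration, not values reported by the authors.

```python
# Sketch of the elbow method for choosing k, then clustering word vectors with K-Means.
# `word_vectors` stands in for whatever embeddings were actually clustered.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
word_vectors = rng.normal(size=(500, 50))      # placeholder: 500 words, 50-dim embeddings

# Elbow method: inspect the within-cluster SSE (inertia) over a range of k
# and look for the "bend" in the curve.
inertias = []
for k in range(2, 25):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(word_vectors)
    inertias.append((k, km.inertia_))

for k, inertia in inertias:
    print(f"k={k:2d}  within-cluster SSE={inertia:,.1f}")

# Suppose the elbow suggests k = 18 (the number of lexicon categories above).
final_km = KMeans(n_clusters=18, n_init=10, random_state=0).fit(word_vectors)
labels = final_km.labels_                      # cluster id for every word vector
```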
Table 1. Distribution of questionnaires.
Personality | Mean | Standard Deviation | L (%) | M (%) | H (%)
A | 32.67 | 4.58 | 29.29 | 47.13 | 23.48
C | 27.79 | 5.08 | 19.52 | 50.66 | 29.82
E | 22.85 | 5.41 | 21.64 | 53.82 | 24.54
N | 25.90 | 4.62 | 21.90 | 57.78 | 20.32
O | 30.69 | 4.90 | 25.33 | 44.85 | 29.82
Table 2. Correlation coefficient between Big Five and different categories.
Label | Categories | A | C | E | N | O
0 | Positive comments | 0.017 * | −0.019 | −0.02 * | 0.014 ** | 0.012
1 | Time | 0.02 * | 0.049 ** | 0.053 | −0.07 | −0.041
2 | Daily life | 0.081 | 0.04 | 0.031 | 0.049 * | 0.096
3 | Relationship | −0.055 | 0.039 | 0.053 ** | 0.012 | −0.108
4 | Location | −0.07 | 0.021 | −0.073 | 0.016 | 0.153 ***
5 | Cognitive processes | −0.085 | 0.029 | −0.001 | 0.004 * | 0.052 **
6 | Blessing | 0.081 * | −0.048 | −0.08 | −0.078 | −0.013
7 | Platform activities | 0.015 | −0.101 | 0.012 ** | −0.012 | 0.084
8 | Positive emotions | 0.064 | −0.027 * | 0.065 | 0.013 * | −0.039
9 | Body health | 0.057 * | 0.072 | −0.06 * | 0.06 * | 0.101 **
10 | Social events | −0.105 | 0.031 | −0.024 ** | 0.041 * | 0.078
11 | Negative comments | −0.054 ** | −0.05 * | −0.069 | 0.055 ** | −0.025
12 | Work | −0.026 *** | 0.077 ** | −0.014 | −0.005 | −0.08
13 | Values | −0.015 ** | 0.021 | −0.031 * | −0.027 | 0.044 **
14 | School life | 0.035 * | 0.021 * | 0.057 ** | 0.009 | 0.043
15 | Negative emotions | −0.089 * | 0.032 | −0.044 | 0.018 *** | −0.073
16 | Competition | 0.129 * | −0.11 | 0.035 | 0.015 | 0.057
17 | Foods | 0.041 | 0.055 | 0.034 | 0.029 | 0.069 ***
Note: * indicates a significance level of 10%; ** represents a significance level of 5%; *** represents a significance level of 1%.
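Table 2 reports correlation coefficients between the Big Five dimensions and the category features, annotated with significance stars. This part of the paper does not restate which correlation measure was used, so the sketch below uses Pearson correlation as an assumed stand-in and attaches stars at the 10%/5%/1% levels defined in the table note.

```python
# Sketch: correlate one Big Five dimension with one category feature and attach
# significance stars at the 10% / 5% / 1% levels, as in the note under Table 2.
import numpy as np
from scipy.stats import pearsonr

def stars(p: float) -> str:
    if p < 0.01:
        return "***"
    if p < 0.05:
        return "**"
    if p < 0.10:
        return "*"
    return ""

rng = np.random.default_rng(1)
openness_scores = rng.normal(size=733)                           # placeholder trait scores (733 users)
location_counts = 0.15 * openness_scores + rng.normal(size=733)  # placeholder category feature

r, p = pearsonr(openness_scores, location_counts)
print(f"O vs. Location: r = {r:.3f}{stars(p)} (p = {p:.4f})")
```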
Table 3. Accuracy of the personality lexicon and SCLIWC in distinguishing personality.
Lexicon | A | C | E | N | O
Personality lexicon | 0.7281 | 0.7075 | 0.6994 | 0.6852 | 0.7335
SCLIWC | 0.7153 | 0.6971 | 0.6791 | 0.6634 | 0.7028
Table 4. Accuracy of distinguishing low and high groups in each personality dimension.
Lexicon | Model | A | C | E | N | O | Average
Personality lexicon | RF | 0.7366 | 0.7148 | 0.6379 | 0.6287 | 0.6952 | 0.6826
Personality lexicon | SVM | 0.6813 | 0.6795 | 0.6514 | 0.6162 | 0.6793 | 0.6615
Personality lexicon | NB | 0.6781 | 0.6743 | 0.6335 | 0.6095 | 0.6775 | 0.6546
SCLIWC | RF | 0.6586 | 0.6539 | 0.6185 | 0.5839 | 0.6007 | 0.6231
SCLIWC | SVM | 0.6437 | 0.6641 | 0.5972 | 0.6094 | 0.594 | 0.6217
SCLIWC | NB | 0.6211 | 0.6363 | 0.5664 | 0.5667 | 0.5771 | 0.5935
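Tables 3 and 4 compare the constructed personality lexicon against SCLIWC using RF, SVM, and NB classifiers. The sketch below reproduces the shape of that experiment on synthetic placeholder features; the real feature matrices, labels, and evaluation protocol are those of the paper, not the ones shown here.

```python
# Sketch: compare RF / SVM / NB accuracy on lexicon-based features for one
# personality dimension (high vs. low). Features and labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.random((1000, 18))           # placeholder: 18 lexicon-category features per sample
y = rng.integers(0, 2, size=1000)    # placeholder: 1 = high, 0 = low on one dimension

models = {
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "SVM": SVC(kernel="rbf"),
    "NB": GaussianNB(),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: accuracy = {acc:.4f}")
```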
Table 5. Personality recognition results for 733 Weibo users.
Dimension | A | C | E | N | O
low | 301 | 421 | 241 | 197 | 252
middle | 45 | 95 | 29 | 220 | 70
high | 387 | 217 | 463 | 316 | 411
Table 6. Model parameter settings for P-BiLSTM-SA.
Parameters | Values | Parameters | Values
Batch_size | 128 | Dropout | 0.5
Hidden_size | 128 | lr | 0.001
Att_size | 100 | Epochs | 300
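Table 6 fixes the hyperparameters of each basic sentiment classifier. The PyTorch sketch below wires those values (batch size 128, hidden size 128, attention size 100, dropout 0.5, learning rate 0.001, 300 training epochs) into a BiLSTM with additive self-attention; the vocabulary size, embedding dimension, and exact attention formulation are assumptions rather than a verbatim reproduction of the authors' implementation.

```python
# Sketch of a BiLSTM + self-attention sentiment classifier using the Table 6 settings.
# Vocabulary size, embedding dimension, and the attention form are assumptions.
import torch
import torch.nn as nn

class BiLSTMSelfAttention(nn.Module):
    def __init__(self, vocab_size=50000, embed_dim=300,
                 hidden_size=128, att_size=100, num_classes=2, dropout=0.5):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden_size, batch_first=True, bidirectional=True)
        self.dropout = nn.Dropout(dropout)
        # Additive self-attention over the BiLSTM outputs (2 * hidden_size per step).
        self.att_proj = nn.Linear(2 * hidden_size, att_size)
        self.att_score = nn.Linear(att_size, 1, bias=False)
        self.classifier = nn.Linear(2 * hidden_size, num_classes)

    def forward(self, token_ids):                       # token_ids: (batch, seq_len)
        h, _ = self.bilstm(self.embedding(token_ids))   # (batch, seq_len, 2*hidden)
        scores = self.att_score(torch.tanh(self.att_proj(h)))   # (batch, seq_len, 1)
        weights = torch.softmax(scores, dim=1)                   # attention weights over steps
        context = (weights * h).sum(dim=1)                       # (batch, 2*hidden)
        return self.classifier(self.dropout(context))            # class logits

model = BiLSTMSelfAttention()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)   # lr from Table 6
batch = torch.randint(1, 50000, (128, 40))                   # batch size 128 from Table 6
logits = model(batch)                                        # (128, 2); training would loop for 300 epochs
```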
Table 7. Accuracy of basic sentiment classifiers and P-BiLSTM-SA.
Classifiers | HA | HC | HE | HN | HO | ALL
Accuracy | 0.7484 | 0.7218 | 0.7408 | 0.7382 | 0.7505 | 0.7315
Classifiers | LA | LC | LE | LN | LO | P-BiLSTM-SA
Accuracy | 0.7351 | 0.7360 | 0.7461 | 0.7297 | 0.7368 | 0.8156
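Table 7 lists the ten basic classifiers (one per high/low group of each Big Five dimension) whose predictions are combined into P-BiLSTM-SA. The combination rule is not restated in this part of the paper, so the sketch below uses simple majority voting as an illustrative stand-in; the actual ensemble may weight or route predictions differently.

```python
# Sketch: combine the per-personality-group predictions (HA, LA, HC, LC, ...) by
# majority vote. The real P-BiLSTM-SA ensemble may differ from this simple rule.
from collections import Counter

def ensemble_vote(group_predictions: dict) -> str:
    """group_predictions maps a classifier name (e.g. 'HA') to its predicted label."""
    counts = Counter(group_predictions.values())
    return counts.most_common(1)[0][0]

predictions = {
    "HA": "positive", "LA": "positive", "HC": "negative", "LC": "positive",
    "HE": "positive", "LE": "negative", "HN": "negative", "LN": "positive",
    "HO": "positive", "LO": "positive",
}
print(ensemble_vote(predictions))   # -> "positive" (7 of the 10 classifiers agree)
```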
Table 8. Model experiment results when one personality dimension is removed.
Index | P-BiLSTM-SA | −A | −C | −E | −N | −O
Accuracy | 0.8156 | 0.7932 | 0.7850 | 0.7842 | 0.8026 | 0.8002
F1-score | 0.7945 | 0.7821 | 0.7742 | 0.7751 | 0.7693 | 0.7798
Note: “−A” denotes the sentiment classification result obtained when the HA and LA classifiers are removed from the ensemble of basic classifiers in P-BiLSTM-SA; the remaining columns are defined analogously.
Table 9. Comparison of experimental results based on the constructed dataset.
Model | Accuracy | Recall | Precision | F1-Score
BiLSTM-SA | 0.7658 | 0.7037 | 0.7647 | 0.7329
P-LSTM | 0.7880 | 0.7266 | 0.7929 | 0.7583
P-BiLSTM | 0.7937 | 0.7300 | 0.7978 | 0.7624
BiLSTM + EMB-ATT | 0.7818 | 0.7485 | 0.7804 | 0.7641
EMCNN | 0.7934 | 0.7427 | 0.8003 | 0.7704
P-BiLSTM-SA | 0.8156 | 0.7425 | 0.8544 | 0.7945
Table 10. Comparison of experimental results based on the NLPCC2013 dataset.
Model | Accuracy | Recall | Precision | F1-Score
BiLSTM-SA | 0.7709 | 0.7137 | 0.7708 | 0.7412
P-LSTM | 0.8164 | 0.7431 | 0.8192 | 0.7793
P-BiLSTM | 0.8190 | 0.7448 | 0.8218 | 0.7814
BiLSTM + EMB-ATT | 0.7921 | 0.7849 | 0.7911 | 0.7880
EMCNN | 0.8211 | 0.8184 | 0.8035 | 0.8109
P-BiLSTM-SA | 0.8288 | 0.8486 | 0.8274 | 0.8379
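Tables 9 and 10 report Accuracy, Recall, Precision, and F1-score. For reference, these are the standard binary classification metrics; the sketch below computes them with scikit-learn on placeholder labels and predictions.

```python
# Sketch: compute the four metrics reported in Tables 9 and 10 from gold labels
# and model predictions (1 = positive, 0 = negative). Values here are placeholders.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]

print(f"Accuracy : {accuracy_score(y_true, y_pred):.4f}")
print(f"Recall   : {recall_score(y_true, y_pred):.4f}")
print(f"Precision: {precision_score(y_true, y_pred):.4f}")
print(f"F1-score : {f1_score(y_true, y_pred):.4f}")
```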
Table 11. Examples correctly classified by P-BiLSTM-SA but incorrectly classified by BiLSTM-SA.
No. | Personalities | Weibo Texts | P-BiLSTM-SA Prediction
(1) | HC HA HO | Self-discipline makes sport purer. Tomorrow is the marathon; why do I feel more excited than I imagined? | Positive
(2) | LE HC LO | Actually, I can't pinpoint the specific reason why, even though I have no worries about food and clothing and have a job, I just feel exhausted, a kind of exhaustion that seems unsolvable. | Negative
(3) | HE HC HN | I finally solved it, I am so awesome, it's incredible. | Positive
(4) | LE LO HN | Feeling exhausted and in pain, enduring for the sake of those fleeting moments of happiness, damn. | Negative
(5) | LA LO HN | You're amazing, hehe. | Negative
(6) | LA LE | Living well, earning money, not starving, and going wherever you want. | Positive
(7) | HA HE | When you know what you want, you won't feel lost. | Positive
(8) | HN LE HO | Living in constant self-doubt, self-denial, self-encouragement, and self-redemption every day. | Negative
(9) | HO HE HA | Shocked! A female college student spent the Qingming holiday watching the replay of a delicious roasted lamb leg live stream in her dorm instead of going out to enjoy the spring scenery! | Negative
(10) | LC LO LE | No desire to study... | Negative
(11) | LN HO HC HE HA | Some recent fragments: a regular life is really nice. I haven't had insomnia lately! The food at the third cafeteria is really delicious! Ordering three dishes for two people is super cost-effective and we can eat until we're full. | Positive