Article

Affirmative Ant Colony Optimization Based Support Vector Machine for Sentiment Classification

College of Computer Science and Information Systems, Najran University, Najran 61441, Saudi Arabia
Electronics 2022, 11(7), 1051; https://doi.org/10.3390/electronics11071051
Submission received: 30 January 2022 / Revised: 19 March 2022 / Accepted: 23 March 2022 / Published: 27 March 2022
(This article belongs to the Special Issue Machine Learning in E-services)

Abstract

Sentiment analysis is a branch of contextual text mining that detects and extracts subjective information, helping an organization understand the social sentiment around its brand or service while monitoring the reviews provided by customers in online shops. With the rise of online shopping and digitalization, product quality has become difficult for users to judge, and there is no model for identifying groups of people with similar or dissimilar sentiments concerning online product evaluations. In this paper, an optimization-based classification algorithm, the Affirmative Ant Colony Optimization Based Support Vector Machine (AACOSVM), is proposed to classify the sentiments expressed by customers in online shopping. The paper introduces a new Ant Colony Optimization method by providing a novel pheromone model for optimizing the support vector machine parameters in two steps: the statute of state transition and the statute of state updates. Together, they allow the ants to use the artificial pheromone trail to pick parameters and motivate the ants to create subsets with the fewest classification mistakes. The proposed work uses product review datasets from Amazon to assess the performance of AACOSVM against existing classifiers, namely the Entropy-Based Classifier (EBC) and the Enhanced Feature Attention Network (EFAN). Various review datasets are available at Amazon for various items; this research effort selected datasets for DVDs, books, kitchen appliances and electronics from the many available review datasets. AACOSVM utilizes the natural foraging behavior of ants searching for food to identify and classify the sentiments present in the product reviews, and it is evaluated using two standard data mining performance metrics, namely F-Measure and Classification Accuracy. Results indicate that the proposed classification algorithm AACOSVM achieves better F-Measure and Classification Accuracy than the EBC and EFAN classifiers.

1. Introduction

Sentiment Analysis (SA) is a technique for determining the feelings of groups and individuals, including a segment of a brand’s audience or a single client in contact with a customer service agent [1]. Sentiment analysis observes conversations and assesses linguistic and vocal expressions to measure attitudes, views, and feelings about a company, service, product, or issue using a scoring method. SA is also known as opinion mining [2,3]. Sentiment analysis is an essential component of a global speech analysis system that determines views or sentiments. SA is frequently powered by an algorithm that scores spoken words, together with vocal cues, that might reveal a person’s underlying sentiments about a subject. SA enables a more objective study of elements that are generally difficult to quantify or that are otherwise judged subjectively.
In internet shopping, SA is a powerful tool that allows users to monitor views and emotions across different client segments, such as those who contact a particular group of representatives in shifts, consumers calling about a specific concern, service or product lines, and other distinct groups [4]. SA may be fully automated, rely entirely on human analysis, or combine both. In certain situations, SA is largely automated but guided by a degree of human supervision that contributes expert knowledge and enhances algorithms and procedures, especially in early deployment phases. Language is complicated, and SA is similarly complex as a method of quantifying and measuring language. What is relatively easy for people to judge subjectively in face-to-face communication, for example whether a person is happy or sad, smiling or angry about the subject at hand, must be transformed into objective, quantifiable scores that account for the numerous nuances of human language in a debate [5,6]. In this study, SA is the practice of evaluating the customer reviews in online product datasets from Amazon to identify whether they are positive, negative, or neutral. Thus, sentiment analysis helps to determine how people feel about a product.
A sentiment is a feeling-led attitude, opinion, or judgment, and SA examines people’s feelings concerning specific entities. The Internet is an informative place for such feelings. From the user’s point of view, users can submit their own material on various social media, such as discussion boards, internet forums, or social networking websites [7]. From a researcher’s point of view, many social media sites provide application interfaces that enable stakeholders, developers and researchers to collect and analyze data in depth. For example, Twitter presently offers three versions of APIs [8]: (i) the REST API, (ii) the Search API and (iii) the Streaming API. Developers may collect status and user information through the REST API; the Search API lets developers query specific Twitter material; and the Streaming API can obtain Twitter content in real time. In addition, developers can combine the APIs to construct their own applications. Therefore, sentiment analysis built on this vast amount of internet data appears to rest on a firm foundation [9].
These forms of internet data, however, have several flaws that may hinder the sentiment analysis process. The first defect is that, while people can submit their information freely, the quality and consistency of their contributions cannot be guaranteed. For example, internet spammers publish spam on forums instead of providing thoughts pertinent to the subject [10]. Some spam is meaningless, while other spam contains irrelevant views, also known as false opinions. The second issue is that ground truth is not always available for online data; ground truth is essentially a label for an item, such as a tag indicating whether a specific opinion is positive, negative, or neutral [11,12].
The Amazon Product Review Dataset is one of the essential ground-truth datasets available to the public. The corpus has 1.6 billion reviews, and each message is marked with the emoticons found inside it. In this research work, the Affirmative Ant Colony Optimization Based Support Vector Machine (AACOSVM) is proposed to classify the sentiments provided by customers and determine how people feel about a product. This work provides a new Ant Colony technique by introducing a novel pheromone model for optimizing the support vector machine parameters in two steps. The first is the statute of state transition, which allows the ants to use the artificial pheromone trail to pick parameters; the objective of every artificial ant in this ant colony method is to construct a solution subset, and the ants create solutions using a probabilistic decision-making policy to move across neighboring states. The second step is the statute of state updates, which is used to motivate ants to create subsets with the fewest classification mistakes. This update statute applies solely to the subset of parameters that caused the least error in the latest iteration, and it increases the pheromone level of the optimal parameter subset, so the ant that discovers the optimal solution can deposit pheromone on the set of selected parameters. This decision and the use of the statute of updates help guide the search towards the neighborhood of the best state produced by the current algorithm iteration. The statute of updates is applied only once all the ants have constructed their solutions, and its use increases the level of pheromone.
The rest of the paper is structured as follows. Section 2 presents the related work. Section 3 focuses on the details of the Affirmative Ant Colony Optimization Based Support Vector Machine, Section 4 outlines the dataset and the performance metrics, and Section 5 offers conclusions and future directions.

2. Related Work

Prospect Theory [13] is used to analyze the relationship between sentiments and their ratings; loss aversion and diminishing sensitivity were tested on review samples. The results support the use of absolute and relative measures for cognitive bias in service recovery. The Freezing Technique [14] is proposed for learning sentiment-based vectors from a Long Short-Term Memory (LSTM) network and a Convolutional Neural Network (CNN). It integrates various deep learning methods, and it was observed that clustered documents work better with the ensemble technique; the method performed well on different datasets. A Learning Method [15] is proposed to train aspect embeddings based on the relationship between terms and categories. A cosine metric is introduced to study and alleviate limitations, and the embeddings are used to initialize existing models for the aspect-category sentiment analysis task. The results show that the aspect embeddings of the proposed technique improved the analysis of sentiments efficiently. A Deep Study [16] analyzes and explores the techniques primarily utilized to detect opinions based on the subject element. Various categories of techniques are employed for classifying text according to whether the expressed views are positive or negative, and a two-step process is portrayed as the preliminary step. Public Sentiment Discovery [17] is proposed as a technique that mines Twitter data to analyze sentiments for predicting stock movement. In this method, textual messages are clustered before they are processed, and data mining algorithms are used to elaborate the word list for classifying the sentiments.
Propagating Sentiment Signals [18] is proposed to analyze the sentiments in customer tweets on Twitter. Tweets with estimated polarity are identified based on sentiments, and the feasibility of supervised annotation as well as the tweets’ polarity are estimated; the trained data were found sufficient for handling reputation polarity. A Two-Pass Classifier [19] is proposed to predict the satisfaction level for a drug. A combination of a Support Vector Machine (SVM) and an Artificial Neural Network (ANN) is used to review customer comments collected in the health care domain. The essential features are extracted from every review, feature vectors are generated, and the two-pass classifier is applied to predict whether a review is positive or negative. Implicit Aspect Extraction [20] is proposed to extract sentiment features at the aspect level. The aspects are specified with explicit words and retrieved from text, and the method uses different techniques to identify the implicit characteristics, which are classified with selected approaches; the issues and limitations of aspect retrieval are also presented. The Multi-modal Joint Sentiment Topic Model [21] is proposed to analyze the sentiments present in weakly supervised blogs. Latent Dirichlet Allocation is applied to explore the hidden topics and sentiments in messages based on the emotion and personality of microblog users, and the experiments show the effectiveness of unsupervised approaches in terms of accuracy. The Enhanced Feature Attention Network (EFAN) [22] is proposed to increase the classification accuracy of target-dependent sentiments. Enhanced feature representations, position-based word features and speech features are studied, and a multi-view network is developed to (i) model the target words and (ii) enrich the target words with sentiment and context. Experimental studies validate the model’s efficiency, and the results indicate better accuracy than previous models.
Features Opinions Extraction [23] is proposed to identify the features related to product opinions using text mining concepts. Customers’ opinions are identified, and tweets about the products are used for feature extraction; customer reviews of different products are tested to validate its effectiveness. A Hybrid Ensemble Scheme [24] is proposed to prune the clusters in text sentiment classification. Ensemble classifiers are applied in separate clusters for feature prediction, and the scheme is tested on balanced datasets and compared with the Bagging, Random Subspace and AdaBoost algorithms; the results show improved efficiency and validity. The Arabic Language Sentiment Analysis Model [25] is proposed to analyze the feelings and opinions present in different languages, including Arabic. It measures users’ sentiments at all stages, emphasizing the importance of an annotated corpus, which helps regulate the phonetic and metonymy levels. The BiLSTM Model [26] is proposed to increase multi-polarity attention for analyzing implicit sentiments. The difference between sentiments and words is first identified, and a restriction mechanism is adopted to ensure optimization performance. Experiments on two sentiment datasets demonstrate accurate capture of sentiment-polarity-based features.
The Entropy-Based Classifier (EBC) [27] is proposed to perform feature and opinion classification. Prediction is performed with the source domain, and comparisons with product reviews from various domains are presented using modified maximum entropy and bipartite graph clustering. To ensure its effectiveness, the EBC is evaluated with domain-specific and domain-independent words with the help of the SentiWordNet dataset.
In deep learning, attention is one of the most powerful concepts used for improving the performance of neural networks [28,29]. Attention mechanisms effectively allow a model to “attend to” a specific part of the input sequence that can be considered of higher importance [30,31,32,33,34,35].
The proposed method, the Affirmative Ant Colony Optimization Based Support Vector Machine (AACOSVM), classifies the sentiments provided by customers in online shopping by exploiting the natural foraging behavior of ants searching for food to identify and classify the sentiments present in the product reviews, and it follows two types of optimization statutes to achieve better performance. In contrast, no optimization or such rules are followed in models such as EBC and EFAN; furthermore, these models perform classification sequentially, which limits their accuracy.

3. Affirmative Ant Colony Optimizing-Based Support Vector Machine

Sentiment Analysis (SA) evaluates product review data to determine whether each review is positive, negative, or neutral. This helps monitor product sentiment in customer feedback and understand customer needs. A Support Vector Machine (SVM) is used to classify the customer comments collected from the dataset: the essential features are extracted from every review and feature vectors are generated. The following sections explain the proposed Affirmative Ant Colony Optimization Based Support Vector Machine (AACOSVM) technique, which classifies the sentiments present in the product data.
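As an illustrative sketch only (the paper does not specify its feature extraction pipeline), review texts could be turned into feature vectors with a TF-IDF representation before being passed to the SVM; the example reviews, labels, and choice of TF-IDF below are assumptions, not the authors' pipeline.

```python
# Hypothetical sketch: turning raw review text into feature vectors for the SVM.
from sklearn.feature_extraction.text import TfidfVectorizer

reviews = [
    "This DVD arrived quickly and the picture quality is excellent",
    "The blender broke after two uses, very disappointing",
]
labels = [+1, -1]  # +1 positive sentiment, -1 negative sentiment (placeholder labels)

vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
feature_vectors = vectorizer.fit_transform(reviews)  # sparse matrix, one row per review
print(feature_vectors.shape)
```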

3.1. Support Vector Machine

The fundamental method of the Support Vector Machine (SVM) is to map the input training data into a higher-dimensional feature space by applying the mapping function $Q$. Let the training set used for classification be $TS = \{(u_a, v_a) \mid u_a \in D,\; v_a \in \{+1, -1\},\; a = 1, 2, 3, \ldots, e-1, e\}$, where $u_a$ represents the input vectors and $v_a$ represents the label of $u_a$. The target function ($tf$) of classification is expressed as Equation (1).

$$ tf = \begin{cases} \min\; Q(x) = 0.5\,(x \cdot x) + PP \sum_{a=1}^{e} \lambda_a \\ \text{s.t.}\;\; v_a\big(x \cdot Q(u_a) + f\big) \ge 1 - \lambda_a, \quad a = 1, 2, 3, \ldots, e-1, e, \quad \lambda_a \ge 0 \end{cases} \tag{1} $$

where $a = 1, 2, 3, \ldots, e-1, e$ and $\lambda_a \ge 0$. The penalty parameter is indicated as $PP$, and $\lambda_a$ indicates a non-negative slack variable.
The task of building the optimum hyperplane therefore becomes the following quadratic programming problem, Equation (2):

$$ qp = \begin{cases} \max\; E(z) = \sum_{a=1}^{e} z_a - 0.5 \sum_{a,b} z_a z_b v_a v_b\, KF(u_a, u_b) \\ \text{s.t.}\;\; \sum_{a=1}^{e} z_a v_a = 0, \quad 0 \le z_a \le PP, \quad a = 1, 2, 3, \ldots, e-1, e \end{cases} \tag{2} $$
The decision function is expressed as Equation (3).

$$ df(c) = \left[ \sum_{a=1}^{e} v_a z_a\, KF(c_i, c) + f \right] \tag{3} $$
The most often utilized SVM kernel functions are as follows:

Radial Basis Function based Kernel

$$ RBFKF(c, c_i) = \exp\!\left( -\frac{(c - c_i)^2}{2 g^2} \right) \tag{4} $$

Polynomial based Kernel

$$ PKF(c, c_i) = \big( (c \cdot c_i) + d \big)^{e} \tag{5} $$

Linear Kernel

$$ LKF(c, c_i) = (c \cdot c_i) \tag{6} $$
This research concentrates on RBFKF for its good functionality and widespread use.
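As a minimal sketch (not the authors' implementation), an RBF-kernel SVM with penalty $PP$ and bandwidth $g$ could be trained with scikit-learn; $PP$ maps to the `C` argument and the bandwidth enters through `gamma = 1/(2*g**2)`, matching Equation (4). The feature vectors and parameter values below are placeholders.

```python
# Illustrative sketch: RBF-kernel SVM with penalty PP and bandwidth g (placeholder values).
import numpy as np
from sklearn.svm import SVC

X = np.array([[0.1, 0.9], [0.8, 0.2], [0.2, 0.8], [0.9, 0.1]])  # toy feature vectors
y = np.array([+1, -1, +1, -1])                                   # toy sentiment labels

PP, g = 10.0, 0.5                              # placeholder parameter values
gamma = 1.0 / (2.0 * g ** 2)                   # RBFKF(c, c_i) = exp(-(c - c_i)^2 / (2 g^2))
model = SVC(C=PP, kernel="rbf", gamma=gamma)
model.fit(X, y)
print(model.predict(X))
```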

3.2. Affirmative Ant Colony Optimization (AACO)

Ant algorithms are optimization algorithms based on the foraging behavior of real ants. When ants seek nourishment, they start by randomly exploring the area around the nest. When an ant finds a food source, it evaluates the quantity and quality of the food and carries some of it back to the nest. During the return journey, the ant deposits a chemical pheromone trail on the ground. The amount of pheromone deposited depends on the amount and quality of the food and leads other ants to the food source. This indirect communication between the ants through pheromone trails allows them to determine the shortest routes between their nest and the food. This feature of real ant colonies is exploited in artificial ant colonies for solving challenging combinatorial optimization problems. In ant colony algorithms, artificial ants probabilistically construct solutions by taking dynamic artificial pheromone trails into consideration. The key component of the Affirmative Ant Colony Optimization (AACO) algorithm is the pheromone model, comprising the transition and update rules, which is utilized for probabilistically sampling the search space. The AACO problem may be defined as follows.
Algorithm 1 depicts the structure of a fundamental AACO algorithm. The problem instance for AACO is provided at the beginning, and certain variables are initialized. A productive heuristic approach for probabilistically constructing solutions is the fundamental component of any AACO algorithm. At each iteration, ants use a predefined pheromone framework to build probabilistic solutions to the optimization problem; the solutions are then utilized to update the pheromones. Daemon actions can sometimes be used to perform centralized activities that a single ant cannot execute, for example applying local search to the constructed solutions, or gathering global information to decide whether it is beneficial to deposit additional pheromone to bias the search procedure from a non-local standpoint.
Algorithm 1: Pseudocode of Fundamental AACO Algorithm
Input: AACO problem instance I = (ValSet, U, Y)
Output: Identification of best solution
Method:
  • Initialization
  • Construction of an initial solution
  • While termination terms are not fulfilled
    • Scheduling of Activities
      • Construction of ant-based solutions
      • Update of Pheromone
      • Actions of daemons
    • Final Scheduling of Activities
  • End while
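To make the loop structure of Algorithm 1 concrete, the following is a minimal Python skeleton of the fundamental AACO cycle; it is an illustrative sketch, not the paper's code, and the function and parameter names (`aaco`, `evaluate`, `candidates`) are placeholders. Here `evaluate` could compute the validation error Y of Equation (7) for a candidate (PP, g) pair, and `candidates` could be a discretized set of such pairs.

```python
# Illustrative skeleton of the fundamental AACO loop in Algorithm 1 (not the paper's code).
import random

def aaco(evaluate, candidates, n_ants=10, n_iterations=50, ec=0.1, pi=1.0):
    pheromone = {c: 1.0 for c in candidates}          # initialization
    best, best_cost = None, float("inf")
    for _ in range(n_iterations):                     # while termination terms are not fulfilled
        solutions = []
        for _ in range(n_ants):                       # construction of ant-based solutions
            total = sum(pheromone.values())
            weights = [pheromone[c] / total for c in candidates]
            choice = random.choices(candidates, weights=weights)[0]
            solutions.append((choice, evaluate(choice)))
        iter_best, iter_cost = min(solutions, key=lambda s: s[1])
        if iter_cost < best_cost:                     # daemon action: remember the global best
            best, best_cost = iter_best, iter_cost
        for c in candidates:                          # pheromone update: evaporation ...
            pheromone[c] *= (1.0 - ec)
        pheromone[iter_best] += pi                    # ... plus deposit on the iteration best
    return best, best_cost
```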

3.3. Affirmative Ant Colony Optimization Based Support Vector Machine

3.3.1. Parameters

SVM performance refers mainly to the capacity to accurately categorize unknown data samples by learning from examples, which is also known as generalization ability. The regularization constant (PP) and the kernel function parameters have a significant effect on the SVM generalization capability. The kernel function parameters, such as the RBF kernel bandwidth g, impact how the data space is mapped and modify the distribution of the samples in the higher-dimensional feature space, while PP establishes a compromise between fitting-error minimization and classification-margin maximization. Since a value of either parameter that is too large or too small degrades SVM generalization, optimizing these parameters is necessary to obtain a strong generalization capacity in practice. This research article offers an AACO method to optimize the PP and g parameters automatically.
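For illustration only, the candidate values of PP and g that the ants search over could be discretized into a grid; the ranges and step counts below are placeholders, not values taken from the paper.

```python
# Hypothetical discretization of the SVM parameter space searched by the ants.
import numpy as np

pp_candidates = np.logspace(-2, 3, num=12)   # penalty parameter PP (C in scikit-learn)
g_candidates = np.logspace(-3, 2, num=12)    # RBF kernel bandwidth g
candidate_pairs = [(pp, g) for pp in pp_candidates for g in g_candidates]
print(len(candidate_pairs), "candidate (PP, g) subsets")
```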

3.3.2. Objective Parameter Optimization Function

Support vector machine parameter optimization aims to employ optimization techniques that explore only a small subset of potential values to identify the parameters that minimize the generalization error. This study estimates it via the validation error; the estimate is noisy, and its variance is reduced by reducing the complexity present in the training dataset. Given a validation set $ValSet = \{(u_a, v_a) \mid u_a \in FS,\; v_a \in LS,\; a = 1, 2, 3, \ldots, e-1, e\}$, with FS the set of features and LS the set of labels, the objective function for SVM parameter optimization is expressed as Equation (7).
$$ \text{minimize}\;\; Y = \left(\frac{1}{e}\right) \times \sum_{a=1}^{e} SF\big( -v_a \times DF(u_a) \big) \tag{7} $$
where $SF$ represents the step function (if $a > 0$ then $SF(a) = 1$, else $SF(a) = 0$) and $DF$ represents the support vector machine decision function of Equation (3).
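The following is a minimal sketch of Equation (7), under the assumption that the step function is applied to $-v_a \cdot DF(u_a)$ so that Y counts the fraction of misclassified validation samples; the function names are illustrative, not from the paper.

```python
# Illustrative sketch of the objective in Equation (7): the validation error rate.
def step(a):
    """SF(a): 1 if a > 0, else 0."""
    return 1 if a > 0 else 0

def objective_y(decision_function, val_set):
    """Y = (1/e) * sum_a SF(-v_a * DF(u_a)); counts sign disagreements (assumed reading)."""
    e = len(val_set)
    return sum(step(-v * decision_function(u)) for u, v in val_set) / e
```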

3.3.3. Novel Pheromone Model

Artificial ants probabilistically develop solutions by taking dynamic artificial pheromone trails into account. The core component of the AACO algorithm is the pheromone model, comprising the statutes of (i) state transition and (ii) state updates, which is used to probabilistically sample the search space. This research article provides a new AACO method by introducing a novel pheromone model for SVM parameter optimization, as follows.

3.3.4. Statute of State Transition (SST)

The SST allows the ants to use the artificial pheromone trail to pick parameters. The objective of every artificial ant throughout this AACO method is to construct a solution subset. The ants create solutions using a probabilistic decision-making policy to move across neighboring states. The state transition rule is given as Equation (8):

$$ SST_{ab} = \frac{\beta_{ab}}{\sum_{a=1}^{N} \beta_{ab}} \tag{8} $$

where $\beta_{ab}$ is the corresponding artificial pheromone value.
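A hedged sketch of the state transition statute in Equation (8): a candidate is drawn with probability proportional to its pheromone level (roulette-wheel selection). The helper name and data layout are illustrative assumptions.

```python
# Illustrative roulette-wheel selection following SST_ab = beta_ab / sum(beta).
import random

def select_candidate(candidates, pheromone):
    total = sum(pheromone[c] for c in candidates)
    probabilities = [pheromone[c] / total for c in candidates]   # Equation (8)
    return random.choices(candidates, weights=probabilities)[0]
```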

3.3.5. Statute of State Updates (SSU)

The objective of the SSU is to motivate ants to create subsets with the fewest classification mistakes. This update statute applies solely to the subset of parameters that caused the least error in the latest iteration, and it increases the pheromone level of the optimal parameter subset, so the ant that discovers the optimal solution can deposit pheromone on the set of selected parameters. This decision and the use of the SSU help guide the search towards the neighborhood of the best state produced by the current algorithm iteration. The SSU is applied only once all the ants have constructed their solutions, and its use increases the level of pheromone, as expressed in Equation (9).
$$ \beta_{ab}^{new} = (1 - ec)\, \beta_{ab}^{old} + PI \cdot e^{-ObFun} \tag{9} $$
where ObFun represents the objective function (i.e., Y) in Equation (7), ec represents the coefficient of evaporation, PI represents the intensity of pheromone.
The SSU aims to assign higher pheromone levels to the solution sets that produce fewer classification mistakes, making them more desirable for future ants to select. In other words, suitable parameters lead to fewer classification mistakes and to a greater likelihood that future ants will choose them.
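Below is a minimal sketch of the update statute (Equation (9)), applied only to the best parameter subset of the current iteration; the reconstruction assumes the deposit term is $PI \cdot e^{-ObFun}$, and the function name is illustrative rather than from the paper.

```python
# Illustrative sketch of Equation (9), applied to the iteration-best parameter subset only.
import math

def update_pheromone(pheromone, best_keys, obj_value, ec=0.1, pi=1.0):
    """beta_new = (1 - ec) * beta_old + PI * exp(-ObFun)  (assumed reading of Equation (9))."""
    for key in best_keys:
        pheromone[key] = (1.0 - ec) * pheromone[key] + pi * math.exp(-obj_value)
    return pheromone
```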

4. Dataset and Performance Metrics

4.1. Dataset

A dataset is a collection of related, discrete items of data that may be accessed individually or in combination, or managed as a single entity. A dataset is organized inside a database into some type of data structure and might contain a collection of business data (names, contact information, sales, salaries and so forth). The database itself can be thought of as containing datasets of certain data, such as the sales data for a specific department of a company.
This paper uses product review datasets from Amazon to assess the performance of AACOSVM against the existing classifiers EBC [27] and EFAN [22]. Various review datasets are accessible at Amazon for various items; this research effort selected datasets for DVDs, books, kitchen appliances and electronics from the many available review datasets. The number of records in the selected datasets is shown in Table 1, the comparative analysis of F-Measure is shown in Table 2 and the comparative analysis of accuracy is shown in Table 3.

4.2. Performance Metrics

This research utilizes the F-Measure and Classification Accuracy to measure the performance of AACOSVM against the existing classifiers EBC [27] and EFAN [22]. F-Measure and Classification Accuracy are computed from four confusion matrix variables, namely: (i) False Positive (FalPos), (ii) False Negative (FalNeg), (iii) True Positive (TruPos) and (iv) True Negative (TruNeg).
FalPos—Outcome of imprecise prediction of positive class.
FalNeg—Outcome of imprecise prediction of negative class.
TruPos—Outcome of precise prediction of positive class.
TruNeg—Outcome of precise prediction of negative class.

4.2.1. F-Measure Analysis

The F-Measure is a statistical measure of classification performance; higher values indicate better classifier performance. It is expressed mathematically as Equation (10).
$$ \text{F-Measure} = \frac{2\, TruPos}{2\, TruPos + FalPos + FalNeg} \tag{10} $$
In Figure 1, the x-axis indicates the product review dataset from Amazon, and the y-axis indicates the F-Measure, measured as a percentage. Figure 1 clearly shows that the proposed classifier AACOSVM attains a better F-Measure than EBC and EFAN; the average F-Measure across these products is 76.33%, as shown in Table 2. By following two different types of statutes for optimization, AACOSVM achieves a better F-Measure than EBC and EFAN, whereas no optimization or such rules are followed in EBC and EFAN, which leads to a poorer F-Measure.

4.2.2. Classification Accuracy

Classification Accuracy is the ratio of correctly predicted cases to the total number of predictions. It is represented mathematically as Equation (11).
$$ \text{Classification Accuracy} = \frac{TruPos + TruNeg}{TruPos + TruNeg + FalPos + FalNeg} \tag{11} $$
In Figure 2, the x-axis indicates the product review dataset from Amazon, and the y-axis indicates the Classification Accuracy, measured as a percentage. Figure 2 shows that AACOSVM performs better than EBC and EFAN. Enhanced, optimized classification helps AACOSVM obtain better Classification Accuracy than EBC and EFAN; the average Classification Accuracy across these products is 75.83%, as shown in Table 3. Because they perform classification sequentially, EBC and EFAN attain lower Classification Accuracy than AACOSVM.
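For concreteness, a small sketch computing Equations (10) and (11) from confusion-matrix counts follows; the counts in the example calls are placeholders, not results reported in the paper.

```python
# Illustrative computation of the two evaluation metrics from confusion-matrix counts.
def f_measure(tru_pos, fal_pos, fal_neg):
    """Equation (10)."""
    return 2 * tru_pos / (2 * tru_pos + fal_pos + fal_neg)

def classification_accuracy(tru_pos, tru_neg, fal_pos, fal_neg):
    """Equation (11)."""
    return (tru_pos + tru_neg) / (tru_pos + tru_neg + fal_pos + fal_neg)

print(f_measure(80, 15, 20))                    # placeholder counts
print(classification_accuracy(80, 70, 15, 20))  # placeholder counts
```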

5. Conclusions and Future Work

The Affirmative Ant Colony Optimization Based Support Vector Machine (AACOSVM) is proposed to classify the sentiments present in the Amazon product review dataset. AACOSVM adopts ant characteristics to identify the sentiments in the dataset. To improve classification, two different types of statutes are used: (i) state transition and (ii) state updates. AACOSVM is evaluated against the existing classification algorithms Entropy-Based Classifier (EBC) and Enhanced Feature Attention Network (EFAN) using the F-Measure and Classification Accuracy metrics. Results indicate that AACOSVM classifies the sentiments present in the Amazon product review dataset better than the EBC and EFAN classifiers. Therefore, the proposed optimization-based classification algorithm determines the sentiments about a product and identifies the features related to product opinions better than the other algorithms.
In the future, we will work on better optimization techniques to further increase the F-Measure and Classification Accuracy, and the computational and storage complexities of the proposed algorithm will be analyzed. Moreover, we will examine hierarchical-attention and self-attention mechanisms under the deep learning infrastructure, with techniques such as different dropouts; these constitute the main directions for future work.

Funding

This research received no external funding.

Data Availability Statement

All the data are available within the article, and the relevant software code for this research work is stored in GitHub and can be downloaded from the following link. Available online: https://github.com/mohammedam3/Ant (accessed on 19 March 2022).

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Zuheros, C.; Martínez-Cámara, E.; Herrera-Viedma, E.; Herrera, F. Sentiment analysis based multi-person multi-criteria decision making methodology using natural language processing and deep learning for smarter decision aid. Case study of restaurant choice using TripAdvisor reviews. Inf. Fusion 2021, 68, 22–36.
  2. Hu, S.; Kumar, A.; Al-Turjman, F.; Gupta, S.; Seth, S. Reviewer credibility and sentiment analysis based user profile modelling for online product recommendation. IEEE Access 2020, 8, 26172–26189.
  3. Soumya, S.; Pramod, K.V. Sentiment analysis of Malayalam tweets using machine learning techniques. ICT Express 2020, 6, 300–305.
  4. Alam, M.; Abid, F.; Guangpei, C.; Yunrong, L.V. Social media sentiment analysis through parallel dilated convolutional neural network for smart city applications. Comput. Commun. 2020, 154, 129–137.
  5. Paramanik, R.N.; Singhal, V. Sentiment Analysis of Indian Stock Market Volatility. Procedia Comput. Sci. 2020, 176, 330–338.
  6. Zheng, X.; Lin, Z.; Wang, X.; Lin, K.J.; Song, M. Incorporating appraisal expression patterns into topic modeling for aspect and sentiment word identification. Knowl.-Based Syst. 2014, 61, 29–47.
  7. Liu, N.; Shen, B. Aspect-based sentiment analysis with gated alternate neural network. Knowl.-Based Syst. 2020, 188, 105010.
  8. Bahri, S.; Bahri, P.; Lal, S. A novel approach of sentiment classification using emoticons. Procedia Comput. Sci. 2018, 132, 669–678.
  9. Pergola, G.; Gui, L.; He, Y. TDAM: A topic-dependent attention model for sentiment analysis. Inf. Process. Manag. 2019, 56, 102084.
  10. Vatrapu, R.; Mukkamala, R.R.; Hussain, A.; Flesch, B. Social set analysis: A set theoretical approach to big data analytics. IEEE Access 2016, 4, 2542–2571.
  11. Li, D.; Rzepka, R.; Ptaszynski, M.; Araki, K. HEMOS: A novel deep learning-based fine-grained humor detecting method for sentiment analysis of social media. Inf. Process. Manag. 2020, 57, 102290.
  12. Aljuaid, H.; Iftikhar, R.; Ahmad, S.; Asif, M.; Afzal, M.T. Important citation identification using sentiment analysis of in-text citations. Telemat. Inform. 2021, 56, 101492.
  13. Sharma, A.; Park, S.; Nicolau, J.L. Testing loss aversion and diminishing sensitivity in review sentiment. Tour. Manag. 2020, 77, 104020.
  14. Nguyen, H.T.; Le Nguyen, M. An ensemble method with sentiment features and clustering support. Neurocomputing 2019, 370, 155–165.
  15. Tan, X.; Cai, Y.; Xu, J.; Leung, H.F.; Chen, W.; Li, Q. Improving aspect-based sentiment analysis via aligning aspect embedding. Neurocomputing 2020, 383, 336–347.
  16. Bhadane, C.; Dalal, H.; Doshi, H. Sentiment analysis: Measuring opinions. Procedia Comput. Sci. 2015, 45, 808–814.
  17. Li, B.; Chan, K.C.; Ou, C.; Ruifeng, S. Discovering public sentiment in social media for predicting stock movement of publicly listed companies. Inf. Syst. 2017, 69, 81–92.
  18. Giachanou, A.; Gonzalo, J.; Crestani, F. Propagating sentiment signals for estimating reputation polarity. Inf. Process. Manag. 2019, 56, 102079.
  19. Padmavathy, P.; Mohideen, S.P. An efficient two-pass classifier system for patient opinion mining to analyze drugs satisfaction. Biomed. Signal Process. Control 2020, 57, 101755.
  20. Ganganwar, V.; Rajalakshmi, R. Implicit aspect extraction for sentiment analysis: A survey of recent approaches. Procedia Comput. Sci. 2019, 165, 485–491.
  21. Huang, F.; Zhang, S.; Zhang, J.; Yu, G. Multimodal learning for topic sentiment analysis in microblogging. Neurocomputing 2017, 253, 144–153.
  22. Yang, M.; Qu, Q.; Chen, X.; Guo, C.; Shen, Y.; Lei, K. Feature-enhanced attention network for target-dependent sentiment classification. Neurocomputing 2018, 307, 91–97.
  23. Mars, A.; Gouider, M.S. Big data analysis to Features Opinions Extraction of customer. Procedia Comput. Sci. 2017, 112, 906–916.
  24. Onan, A.; Korukoğlu, S.; Bulut, H. A hybrid ensemble pruning approach based on consensus clustering and multi-objective evolutionary algorithm for sentiment classification. Inf. Process. Manag. 2017, 53, 814–833.
  25. Alsayat, A.; Elmitwally, N. A comprehensive study for Arabic Sentiment Analysis (Challenges and Applications). Egypt. Inform. J. 2021, 21, 7–12.
  26. Wei, J.; Liao, J.; Yang, Z.; Wang, S.; Zhao, Q. BiLSTM with multi-polarity orthogonal attention for implicit sentiment analysis. Neurocomputing 2020, 383, 165–173.
  27. Deshmukh, J.S.; Tripathy, A.K. Entropy based classifier for cross-domain opinion mining. Appl. Comput. Inform. 2018, 14, 55–64.
  28. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30, 5998–6008.
  29. Basiri, M.E.; Nemati, S.; Abdar, M.; Cambria, E.; Acharya, U.R. ABCDM: An attention-based bidirectional CNN-RNN deep model for sentiment analysis. Future Gener. Comput. Syst. 2021, 115, 279–294.
  30. Luo, X.; Wang, Z.; Shang, M. An instance-frequency-weighted regularization scheme for non-negative latent factor analysis on high-dimensional and sparse data. IEEE Trans. Syst. Man Cybern. Syst. 2019, 51, 3522–3532.
  31. Wu, H.; Luo, X.; Zhou, M. Advancing Non-Negative Latent Factorization of Tensors with Diversified Regularizations; IEEE: New York, NY, USA, 2020; p. 1.
  32. Tang, J.; Liu, G.; Pan, Q. A review on representative swarm intelligence algorithms for solving optimization problems: Applications and trends. IEEE CAA J. Autom. Sin. 2021, 8, 1627–1643.
  33. Gu, W.; Yu, Y.; Hu, W. Artificial bee colony algorithm-based parameter estimation of fractional-order chaotic system with time delay. IEEE CAA J. Autom. Sin. 2017, 4, 107–113.
  34. Zhang, W.; Zhang, H.; Liu, J.; Li, K.; Yang, D.; Tian, H. Weather prediction with multiclass support vector machines in the fault detection of photovoltaic system. IEEE CAA J. Autom. Sin. 2017, 4, 520–525.
  35. Teng, S.; Wu, N.; Zhu, H.; Teng, L.; Zhang, W. SVM-DT-based adaptive and collaborative intrusion detection. IEEE CAA J. Autom. Sin. 2017, 5, 108–118.
Figure 1. F-Measure vs. AACOSVM.
Figure 2. Classification Accuracy vs. AACOSVM.
Table 1. Count of Records in Dataset.
Amazon Product Review Big Dataset | Total
DVD | 142,875
Book | 146,284
Kitchen appliances | 72,849
Electronics | 88,137
Table 2. Comparative Analysis of F-Measure.
Domain Name | AACOSVM F-Measure (%) | EFAN F-Measure (%) | EBC F-Measure (%)
DVD | 78.3 | 63.1 | 55.2
Book | 75.6 | 65.4 | 59.7
Kitchen appliances | 74.5 | 66.9 | 58.2
Electronics | 76.9 | 63.0 | 51.9
Average F-Measure (%) | 76.33 | 64.60 | 56.25
Table 3. Comparative Analysis of Accuracy.
Domain Name | AACOSVM Accuracy (%) | EFAN Accuracy (%) | EBC Accuracy (%)
DVD | 77.8 | 62.6 | 54.4
Book | 75.1 | 65.1 | 59.3
Kitchen appliances | 73.8 | 66.4 | 57.9
Electronics | 76.6 | 62.5 | 51.4
Average Accuracy (%) | 75.83 | 64.15 | 55.75
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
