Article
Peer-Review Record

Short Text Sentiment Classification Using Bayesian and Deep Neural Networks

Electronics 2023, 12(7), 1589; https://doi.org/10.3390/electronics12071589
by Zhan Shi and Chongjun Fan *
Submission received: 3 February 2023 / Revised: 17 March 2023 / Accepted: 23 March 2023 / Published: 28 March 2023
(This article belongs to the Special Issue Heterogeneous and Parallel Computing for Cyber Physical Systems)

Round 1

Reviewer 1 Report

In this paper, the authors present a study of short text sentiment classification based on Bayesian networks and deep neural network algorithms. However, there are some limitations that must be addressed, as follows.

1.       The title should be changed into “Short text sentiment classification using Bayesian and Deep Neural Networks.”

2.       The first sentence of the abstract is not professionally written (everyone knows DNN is an ML technology). In addition, the abstract is not attractive. Some sentences in the abstract should be modified to make it more attractive for readers. The achieved results should be mentioned more clearly at the end of the abstract.

3.       The introduction is very short; more details should be included. The authors should also mention existing sentiment classification works. In addition, the main contributions should be presented in the form of bullets.

4.       In related work, the existing works on sentiment classification should be discussed: 'Traffic accident detection and condition analysis based on social networking data' and 'Attention-Based Neural Networks for Sentiment Attitude Extraction using Distant Supervision'.

5.       The main framework should be included in this paper. I think Figure 1 should be revised.

6.       Captions of the figures are not self-explanatory. The captions should be self-explanatory and clearly explain the figures. Extend the descriptions of the mentioned figures to make them self-explanatory.

7.       Equations are not properly explained; see Equations 11, 12, and 13.

8.       Where are the details about information gain?

9.       In the conclusion section, the future work should be included.

10.     The whole manuscript should be thoroughly revised in order to improve its English.

 

Author Response

  1. The title should be changed into “Short text sentiment classification using Bayesian and Deep Neural Networks.”

A: Thank you for your suggestion. We have revised the title of the article.

  2. The first sentence of the abstract is not professionally written (everyone knows DNN is an ML technology). In addition, the abstract is not attractive. Some sentences in the abstract should be modified to make it more attractive for readers. The achieved results should be mentioned more clearly at the end of the abstract.

A: Thank you for your suggestion. We have revised the abstract.

  3. The introduction is very short; more details should be included. The authors should also mention existing sentiment classification works. In addition, the main contributions should be presented in the form of bullets.

A: In sentiment analysis, sentiment classification is the most important task. Based on the sentiment information expressed in the text, it divides the text into two or more categories; that is, it classifies the author's attitudes, views and tendencies. Sentiment classification is a relatively new research direction with important application value in opinion mining, information prediction, review classification, spam filtering, part-of-speech tagging, public opinion monitoring, etc. We have revised the introduction accordingly.

  4. In related work, the existing works on sentiment classification should be discussed: 'Traffic accident detection and condition analysis based on social networking data' and 'Attention-Based Neural Networks for Sentiment Attitude Extraction using Distant Supervision'.

A: Thank you for your suggestions. We have added these works to the related work section of the article.

  5. The main framework should be included in this paper. I think Figure 1 should be revised.

A: Thank you for your suggestion. We have revised it.

  6. Captions of the figures are not self-explanatory. The captions should be self-explanatory and clearly explain the figures. Extend the descriptions of the mentioned figures to make them self-explanatory.

A: Thank you for your suggestion. We have revised the figure captions.

  7. Equations are not properly explained; see Equations 11, 12, and 13.

A: Thank you for your suggestion. We have deleted the redundant formula.

  8. Where are the details about information gain?

A: Thank you for your suggestion. We have supplemented it.
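
For reference, information gain is commonly defined as the reduction in entropy of the class label Y once a feature X is observed (a sketch in standard notation; the manuscript's own symbols may differ):

    IG(Y; X) = H(Y) - H(Y \mid X)
             = -\sum_{y} p(y)\log p(y) + \sum_{x} p(x)\sum_{y} p(y \mid x)\log p(y \mid x)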

  9. In the conclusion section, the future work should be included.

A: Due to the limitations of time and technology, this paper has not analyzed in detail all of the problems encountered in short text sentiment classification; these will be discussed further in future work. We have added this point to the conclusion of the article.

  10. The whole manuscript should be thoroughly revised in order to improve its English.

A: Thank you for your suggestion. We have thoroughly checked and revised the English throughout the article.

Reviewer 2 Report

In this paper, the authors present an evaluation of a sentiment classification approach based on Deep Neural Networks and Belief Networks.

 

The paper has major issues both from the presentation of the content and the methodology.

 

Please find hereby a list of issues that should be fixed in order to improve the paper:

 

- "This paper mainly introduces the deep neural network algorithm, Bayesian regularization deep belief network, machine learning text sentiment classification, tests the role of the meta-learning method based on deep belief network in text sentiment classification, and makes experimental research and analysis, and concludes the desired conclusion."

 

I think that this claim is an overstatement of the problems that you are tackling in this paper. As for "introducing" the deep neural network: you barely describe what a DNN is in Section 3. The same holds for the other points. Try to present the problem in a more specific way.

 

- "The innovation of this paper is that this paper re-uses the deep neural network algorithm and establishes the BR-DBN model and tests its performance."

 

Is "re-using" deep neural network innovative? What do you mean by reuse?

 

- "Encoder (AE) appeared in the 1980s,"

 

Why are you using AE as the acronym?

 

The description of the Encoder is way too simplistic. 

 

- "Then the energy function between the visible layer node and the hidden layer node (d, g) is ..."

 

What is the energy function?

 

"This paper constructs a BR DBN model whose bottom is superimposed by multilayer Bayesian regularization RBM (BR RBM)."

 

What is the "bottom"?

 

Figure 3, what is the label "backpropagation" close to the computer to the right?

 

- "Error rate" 

 

What is the definition of error rate?

 

From a methodological point of view, there is no evidence of the use of training/validation/test procedure. Not even in the toy examples with the IRIS datasets.

 

In this case, it is hard to understand what the fine-tuning is (of the hyperparameters, I would guess) adopted by the method.

 

The bibliography is not complete; some important information, such as publisher and DOI, is missing.

 

Author Response

"This paper mainly introduces the deep neural network algorithm, Bayesian regularization deep belief network, machine learning text sentiment classification, tests the role of the meta-learning method based on deep belief network in text sentiment classification, and makes experimental research and analysis, and concludes the desired conclusion."

 

I think that this claim is an overstatement of the problems that you are tackling in this paper. As for "introducing" the deep neural network: you barely describe what a DNN is in Section 3. The same holds for the other points. Try to present the problem in a more specific way.

 A: Thank you for your suggestion. We have supplemented it in Section 3.1.

"The innovation of this paper is that this paper re-uses the deep neural network algorithm and establishes the BR-DBN model and tests its performance."

 

Is "re-using" deep neural network innovative? What do you mean by reuse?

 A: Thank you for your suggestion. This sentence is not accurate. We have corrected it.

- "Encoder (AE) appeared in the 1980s,"

 Why are you using AE as the acronym?

 A: Thank you for your suggestion. We have revised it.

The description of the Encoder is way too simplistic. 

A: An encoder is a device that encodes a signal or data into a form that can be communicated, transmitted and stored; it converts angular or linear displacement into electrical signals. According to the reading mode, encoders can be divided into contact and non-contact types; according to the working principle, they can be divided into incremental and absolute encoders. We have added this to the article.

- "Then the energy function between the visible layer node and the hidden layer node (d, g) is ..."

 

What is the energy function?

 Answer: Energy function: regard the things to be clustered as a system, and the degree of difference between things as the energy between system elements. When the energy reaches a certain level, things will form a new class, indicating that the system needs to be reclassified. We have already added.

"This paper constructs a BR DBN model whose bottom is superimposed by multilayer Bayesian regularization RBM (BR RBM)."

 

What is the "bottom"?

 A: Thank you for your suggestion. We have supplemented it in the article.

Figure 3, what is the label "backpropagation" close to the computer to the right?

Answer: Backpropagation calculates the partial derivatives of the loss function with respect to each layer's parameters in the reverse direction and uses them to update the parameters.
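
To make this concrete, here is a minimal NumPy sketch of backpropagation through a small two-layer network with a mean squared error loss (an illustration of the general mechanism only, not the BR-DBN model in the manuscript):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((8, 4))   # toy inputs (illustrative only)
    y = rng.standard_normal((8, 1))   # toy targets

    W1, b1 = rng.standard_normal((4, 5)), np.zeros((1, 5))
    W2, b2 = rng.standard_normal((5, 1)), np.zeros((1, 1))
    lr = 0.01  # learning rate

    for step in range(200):
        # Forward pass, layer by layer
        h = np.maximum(0, X @ W1 + b1)      # hidden layer with ReLU
        y_hat = h @ W2 + b2                 # output layer
        loss = np.mean((y_hat - y) ** 2)    # MSE loss

        # Backward pass: partial derivatives propagated in the reverse direction
        g_out = 2 * (y_hat - y) / len(X)            # dLoss/dy_hat
        g_W2 = h.T @ g_out
        g_b2 = g_out.sum(axis=0, keepdims=True)
        g_h = g_out @ W2.T                          # dLoss/dh
        g_pre = g_h * (h > 0)                       # back through the ReLU
        g_W1 = X.T @ g_pre
        g_b1 = g_pre.sum(axis=0, keepdims=True)

        # Parameter updates (gradient descent)
        W2 -= lr * g_W2; b2 -= lr * g_b2
        W1 -= lr * g_W1; b1 -= lr * g_b1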

- "Error rate" 

 

What is the definition of error rate?

A: The error rate is the proportion of incorrectly classified samples to the total number of samples.
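
In standard notation (a sketch; the manuscript may use different symbols), for N samples with predicted labels \hat{y}_i and true labels y_i:

    \text{error rate} = \frac{1}{N}\sum_{i=1}^{N} \mathbf{1}\left[\hat{y}_i \neq y_i\right]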

From a methodological point of view, there is no evidence of the use of training/validation/test procedure. Not even in the toy examples with the IRIS datasets.

 

In this case, it is hard to understand what the fine-tuning is (of the hyperparameters, I would guess) adopted by the method.

 A: Thank you for your suggestion. We have made appropriate additions to the article.

The bibliography is not complete; some important information, such as publisher and DOI, is missing.

A: Thank you for your suggestion. We have supplemented it.

Reviewer 3 Report

 

Title:

-------

Machine Short Text Sentiment with Bayesian Networks and Deep Neural Network Algorithms

 

Overall Contribution:

-----------------------------

The authors describe a neural network architecture model to perform short text sentiment classification. The paper’s topic is relevant and timely. Yet, many issues need to be revised and clarified by the authors.  

 

Improvements:

--------------------

Comment 1: How is the proposed neural network architecture specific to text classification? How are textual features represented and processed? (please refer to below mentioned references for more information).

Comment 2: The authors can refer to and discuss the recent studies below on text classification for comparison:

- Supervised term-category feature weighting for improved text classification. Knowl. Based Syst. 261: 110215 (2023)

- Bayesian Text Classification and Summarization via A Class-Specified Topic Model. J. Mach. Learn. Res. 22: 89:1-89:48 (2021)

- Lazy fine-tuning algorithms for naïve Bayesian text classification. Appl. Soft Comput. 96: 106652 (2020)

- …

Comment 3: How is the proposed neural network architecture specific to sentiment analysis/opinion mining? How are sentiment features represented and processed? (please refer to below mentioned references for information).

Comment 4: The authors can refer to and discuss the recent studies below on text sentiment analysis for comparison:

- KSCB: a novel unsupervised method for text sentiment analysis. Appl. Intell. 53(1): 301-311 (2023)

- Tailored text augmentation for sentiment analysis. Expert Syst. Appl. 205: 117605 (2022)

- Unsupervised word-level affect analysis and propagation in a lexical knowledge graph. Knowl. Based Syst. 165: 432-459 (2019)

- …

 Comment 5: The authors need to distinguish between sentiment analysis (positive/negative) and emotion analysis (happy, sad, angry, etc.). They need to specify the sentiment/emotion model features they are using in their study (please refer to reference papers above for more information).

Author Response

Title:

-------

Machine Short Text Sentiment with Bayesian Networks and Deep Neural Network Algorithms

 

Overall Contribution:

-----------------------------

The authors describe a neural network architecture model to perform short text sentiment classification. The paper’s topic is relevant and timely. Yet, many issues need to be revised and clarified by the authors.  

 

Improvements:

--------------------

Comment 1: How is the proposed neural network architecture specific to text classification? How are textual features represented and processed? (please refer to below mentioned references for more information).

Comment 2: The authors can refer to and discuss the recent studies below on text classification for comparison:

- Supervised term-category feature weighting for improved text classification. Knowl. Based Syst. 261: 110215 (2023)

- Bayesian Text Classification and Summarization via A Class-Specified Topic Model. J. Mach. Learn. Res. 22: 89:1-89:48 (2021)

- Lazy fine-tuning algorithms for naïve Bayesian text classification. Appl. Soft Comput. 96: 106652 (2020)

- …

A: Thank you for your suggestion. We have added these references to the related work section of the article.

Comment 3: How is the proposed neural network architecture specific to sentiment analysis/opinion mining? How are sentiment features represented and processed? (please refer to below mentioned references for information).

Comment 4: The authors can refer to and discuss the recent studies below on text sentiment analysis for comparison:

- KSCB: a novel unsupervised method for text sentiment analysis. Appl. Intell. 53(1): 301-311 (2023)

- Tailored text augmentation for sentiment analysis. Expert Syst. Appl. 205: 117605 (2022)

- Unsupervised word-level affect analysis and propagation in a lexical knowledge graph. Knowl. Based Syst. 165: 432-459 (2019)

- …

 Comment 5: The authors need to distinguish between sentiment analysis (positive/negative) and emotion analysis (happy, sad, angry, etc.). They need to specify the sentiment/emotion model features they are using in their study (please refer to reference papers above for more information).

A: Thank you for your suggestion. We have added these references to the related work section of the article.

Reviewer 4 Report

The authors should describe the whole algorithm with pseudocode.

The authors should compare their methodology with other similar works presented in section 2.

Some kind of cross validation should be used for the experiments.

A statistical test should be used for the comparison of the examined methods.

The authors should explain why the proposed methodology seems to work well and present some information about the time efficiency of their method.

Author Response

The authors should describe the whole algorithm with pseudocode.

A: Thank you for your suggestion. We have supplemented it.

The authors should compare their methodology with other similar works presented in section 2.

A: Thank you for your suggestion. We have supplemented it.

Some kind of cross validation should be used for the experiments.

A: Thank you for your suggestion. We have supplemented it.
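
For illustration only (a minimal scikit-learn sketch on toy data, not the setup actually used in the revised manuscript), k-fold cross-validation of a simple text classifier could look like this:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Toy corpus, illustrative only
    texts = ["great movie", "loved it", "wonderful acting",
             "terrible plot", "waste of time", "very boring"]
    labels = [1, 1, 1, 0, 0, 0]

    clf = make_pipeline(CountVectorizer(), MultinomialNB())
    scores = cross_val_score(clf, texts, labels, cv=3)  # 3-fold cross-validation
    print("fold accuracies:", scores, "mean:", scores.mean())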

A statistical test should be used for the comparison of the examined methods.

A: Thank you for your suggestion. We have supplemented it.
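
One possible choice (shown here only as a hedged sketch, not necessarily the test adopted in the revision) is a paired t-test over the per-fold accuracies of two methods evaluated on the same folds:

    from scipy.stats import ttest_rel

    # Per-fold accuracies of two methods on the same folds (illustrative numbers only)
    acc_method_a = [0.86, 0.88, 0.84, 0.87, 0.85]
    acc_method_b = [0.82, 0.85, 0.83, 0.84, 0.81]

    t_stat, p_value = ttest_rel(acc_method_a, acc_method_b)
    print(f"paired t-test: t = {t_stat:.3f}, p = {p_value:.3f}")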

The authors should explain why the proposed methodology seems to work well and present some information about the time efficiency of their method.

A: Thank you for your suggestion. We have supplemented it.

Round 2

Reviewer 1 Report

The authors have addressed all my comments. I have no further comments. Therefore, this paper can be accepted in its present form.

Author Response

The authors have addressed all my comments. I have no further comments. Therefore, this paper can be accepted in its present form.

Reply: Thank you for your comments. I have made additional revisions to the manuscript.

Reviewer 2 Report

The authors have partially replied to the comments, but the quality of the work is still insufficient.

First, when you reply to the comments, you cannot simply add sentences that sound out of context or like definitions that are detached from the content.

See for example "Energy function: regard the things to be clustered as a system, and the degree of difference between things as the energy between system elements. When the energy reaches a certain level, things form a new class, indicating that the system needs to be reclassified."

(btw, the reader still needs an explanation of what energy is).

There is still a pending major issue about the optimization of hyperparameters. We do not know anything about training/validation/test.

In addition, it is not clear 1) what the baseline is and 2) if the improvements (over what baseline?) are significant.

Author Response

Reviewer 2

The authors have partially replied to the comments, but the quality of the work is still insufficient.

First, when you reply to the comments, you cannot simply add sentences that sound out of context or like definitions that are detached from the content.

See for example "Energy function: regard the things to be clustered as a system, and the degree of difference between things as the energy between system elements. When the energy reaches a certain level, things form a new class, indicating that the system needs to be reclassified."

(btw, the reader still needs an explanation of what energy is).

A: Thank you for your suggestion. We have revised it. In a deep belief network, the energy function is the function used to calculate the "energy" or "cost" of each state of the network. It is one of the core concepts of deep belief networks: it describes the complexity of the model and how well the model fits the data.
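
For reference, the energy function of a standard restricted Boltzmann machine with visible units v, hidden units h, weights W and biases a, b is conventionally written as follows (a sketch in conventional notation; the manuscript denotes the visible and hidden nodes by d and g, so its symbols differ):

    E(v, h) = -\sum_{i} a_i v_i - \sum_{j} b_j h_j - \sum_{i,j} v_i W_{ij} h_j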

There is still a pending major issue about the optimization of hyperparameters. We do not know anything about training/validation/test.

Answer: Hyperparameter optimization refers to the process of finding the optimal combination of hyperparameters in machine learning so as to improve the performance of the model. Common hyperparameter optimization methods include grid search, random search, Bayesian optimization and automated machine learning. Grid search is a brute-force method that trains the model on every possible combination of hyperparameters. Random search samples hyperparameter combinations at random, balancing computational cost and effectiveness. Bayesian optimization, based on Bayes' theorem, gradually adjusts the search range according to the performance of the hyperparameter combinations already evaluated in order to find the optimal combination. Automated machine learning automatically selects the best model and hyperparameter combination and can transfer learning across multiple tasks, thus improving the generalization ability of the model.
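
As a small illustration of one of these methods (a sketch using scikit-learn and the Iris data mentioned by the reviewer, not the manuscript's actual experiment), grid search with an explicit training/test split and cross-validation on the training portion:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)

    # Hold out a test set; the grid search cross-validates on the training part only
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0, stratify=y)

    param_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}
    search = GridSearchCV(SVC(), param_grid, cv=5)  # 5-fold CV over the grid
    search.fit(X_train, y_train)

    print("best hyperparameters:", search.best_params_)
    print("held-out test accuracy:", search.score(X_test, y_test))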

In addition, it is not clear 1) what the baseline is and 2) if the improvements (over what baseline?) are significant.

A: Thank you for your suggestion. We searched the full text and did not find the word "baseline".

Reviewer 3 Report

The authors did a good job in revising the paper and answered most reviewer comments.

Author Response

The authors did a good job in revising the paper and answered most reviewer comments.

Reply: Thank you for your comments. I have made additional revisions to the manuscript.

Reviewer 4 Report

The paper could be accepted in the current form

Author Response

The paper could be accepted in the current form

Reply: Thank you for your comments. I have made additional revisions to the manuscript.

 
