# Quantum-Inspired Fully Complex-Valued Neutral Network for Sentiment Analysis


## Abstract


## 1. Introduction

- Based on quantum computation theory and complex-valued neural networks, we propose the theory and architecture of the ComplexQNN.
- We introduce the detailed modules of the ComplexQNN fully based on complex-valued neural networks, including a complex-valued embedding layer, a quantum encoding layer, and a measurement layer.
- The ComplexQNN is evaluated on six sentiment-classification datasets, covering both binary and multi-class classification. We adopt two metrics, accuracy and F1-score, to evaluate our model and compare it with five classical neural models (TextCNN, GRU, ELMo, BERT, and RoBERTa). The experimental results show that the ComplexQNN achieves roughly 10% higher accuracy than TextCNN and GRU, and is competitive with ELMo, BERT, and RoBERTa.

## 2. Related Works

#### 2.1. Preliminary

#### 2.1.1. Quantum State

#### 2.1.2. Quantum System

#### 2.1.3. Quantum State Evolution

#### 2.1.4. Quantum Measurement

#### 2.2. Complex Neural Network

#### 2.2.1. Complex-Valued Linear Layers

where **i** denotes the imaginary unit. The complex-valued linear layer is the basic module of a complex-valued neural network; both complex-valued CNNs and complex-valued RNNs are built on it.
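As a minimal sketch (not the authors' implementation), a complex-valued linear layer $y = (A + iB)x + c$ can be realized with two real weight matrices, which is also how it is typically implemented in frameworks without native complex autograd:

```python
import numpy as np

def complex_linear(x, A, B, c):
    """Complex-valued linear layer y = (A + iB) x + c.

    Implemented with real matrices:
      Re(y) = A·Re(x) - B·Im(x) + Re(c)
      Im(y) = A·Im(x) + B·Re(x) + Im(c)
    """
    xr, xi = x.real, x.imag
    yr = A @ xr - B @ xi + c.real
    yi = A @ xi + B @ xr + c.imag
    return yr + 1j * yi

# Sanity check: the real-matrix form agrees with direct complex arithmetic.
rng = np.random.default_rng(0)
A, B = rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
c = rng.normal(size=3) + 1j * rng.normal(size=3)
x = rng.normal(size=4) + 1j * rng.normal(size=4)
assert np.allclose(complex_linear(x, A, B, c), (A + 1j * B) @ x + c)
```

The matrix shapes and random inputs here are illustrative; in the paper the same construction underlies the complex-valued CNN and RNN modules as well.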

#### 2.2.2. Complex-Valued CNNs

#### 2.2.3. Complex-Valued RNN

#### 2.3. Quantum-Inspired Model

## 3. Materials and Methods

#### 3.1. Datasets

#### 3.2. ComplexQNN

#### 3.2.1. Theory of the ComplexQNN

#### 3.2.2. Architecture of the ComplexQNN

- Complex embedding: The complex embedding maps each token index (the position of the word in the vocabulary) into an n-dimensional complex vector space, corresponding to the state-preparation step in quantum computing. Each word is thus mapped from a discrete space into a high-dimensional Hilbert space and represented as a complex-valued column vector.
- Projection: The projection maps the discrete words of a sentence into an $n\times n$ complex-valued matrix space. In the previous step, the complex-valued word embeddings mapped each word into an $n$-dimensional complex vector space. From Equation (25), the density-matrix representation of the sentence can then be computed, where the word weights $\beta$ can be trained with an attention mechanism; by default, all words share the same weight.
- Evolution: The evolution process simulates the change of a quantum system. In quantum-computation theory, changes of quantum states and density matrices are realized by quantum gates; a quantum gate corresponds to a unitary matrix whose dimension is determined by the number of qubits $m$ it acts on ($n={2}^{m}$). In the ComplexQNN, we simulate this evolution with complex-valued linear layers, complex-valued recurrent neural networks, and complex-valued convolutional neural networks. The input and output of the evolution module are both $n\times n$, so the dimension of the original quantum system is unchanged after the module learns the features within the sentence.
- Classifier: The classifier takes the high-dimensional features learned by the previous modules as input and predicts the classification result. Based on quantum-computing theory, the output can be obtained directly by measurement, as shown in Equation (27). A set of linearly independent measurement bases must be constructed, with one basis per class. The final prediction is the label whose measurement basis yields the largest probability.
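Under our reading of the four steps above, the embedding-projection-evolution-measurement pipeline can be sketched as a toy NumPy illustration (the dimensions, random states, and random unitary are hypothetical stand-ins for the trained layers):

```python
import numpy as np

rng = np.random.default_rng(0)
n, seq_len, num_classes = 8, 5, 2  # hypothetical toy sizes

# 1. Complex embedding: each token -> unit-norm complex vector (a pure state).
states = rng.normal(size=(seq_len, n)) + 1j * rng.normal(size=(seq_len, n))
states /= np.linalg.norm(states, axis=1, keepdims=True)

# 2. Projection: density matrix rho = sum_i beta_i |w_i><w_i|, uniform weights.
beta = np.full(seq_len, 1.0 / seq_len)
rho = sum(b * np.outer(s, s.conj()) for b, s in zip(beta, states))
# A density matrix is Hermitian with unit trace.
assert np.allclose(rho, rho.conj().T) and np.isclose(np.trace(rho).real, 1.0)

# 3. Evolution: a unitary U (random here; a gate on m qubits has n = 2**m)
# transforms rho -> U rho U† and preserves the trace.
U, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
rho = U @ rho @ U.conj().T

# 4. Measurement: p_k = <v_k| rho |v_k> = tr(P_k rho) for orthonormal basis
# vectors v_k; the predicted label is the class with the largest probability.
V, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
probs = np.array([(V[:, k].conj() @ rho @ V[:, k]).real
                  for k in range(num_classes)])
prediction = int(np.argmax(probs))
```

In the actual model the random unitary is replaced by the learned complex-valued evolution module, and the measurement basis is trained rather than sampled.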

#### 3.2.3. Implementation Details of the ComplexQNN

#### 3.2.4. Application in Sentiment Analysis

#### 3.3. Evaluation Metrics

- Accuracy: Accuracy is the ratio of correctly predicted samples to all samples: $$\mathrm{Accuracy}=\frac{\mathrm{TP}+\mathrm{TN}}{\mathrm{TP}+\mathrm{TN}+\mathrm{FP}+\mathrm{FN}},$$ where TP, TN, FP and FN denote the true positives, true negatives, false positives and false negatives, respectively.
- F1-score: The F1-score is the harmonic mean of precision and recall: $$\mathrm{F1\text{-}score}=2\times\frac{\mathrm{Precision}\times\mathrm{Recall}}{\mathrm{Precision}+\mathrm{Recall}},$$ where $$\mathrm{Precision}=\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FP}},\qquad \mathrm{Recall}=\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}}.$$
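Both metrics follow directly from the confusion-matrix counts; a short sketch with hypothetical counts:

```python
def accuracy(tp, tn, fp, fn):
    """Fraction of correctly classified samples."""
    return (tp + tn) / (tp + tn + fp + fn)

def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative confusion counts (not from the paper's experiments).
print(accuracy(tp=50, tn=40, fp=5, fn=5))  # 0.9
print(f1_score(tp=50, fp=5, fn=5))         # precision = recall = 10/11
```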

## 4. Experimental Results and Discussion

#### 4.1. Comparison Models

#### 4.2. Results and Analysis

- Compared with the classical models TextCNN and GRU, the popular pretrained models (ELMo, BERT, RoBERTa) and our model have significant advantages. Specifically, the improvements of the ComplexQNN over TextCNN in accuracy on the considered datasets are: CR ($+12.4\%$), MPQA ($+17.1\%$), MR ($+14.9\%$), SST-2 ($+8.9\%$), SUBJ ($+7.0\%$), SST-5 ($+19.4\%$), an average improvement of $+13.28\%$. Under the F1-score metric, the improvements are: CR ($+14.8$), MPQA ($+10.9$), MR ($+14.4$), SST-2 ($+7.5$), SUBJ ($+7.2$), SST-5 ($+4.3$), an average improvement of $+9.85$. These results show that the ComplexQNN substantially outperforms TextCNN, because the ComplexQNN is designed on the basis of quantum-computing theory and can learn richer text features.
- Among the pretrained models, the ComplexQNN obtains better experimental results than ELMo and BERT. Compared with RoBERTa, the ComplexQNN achieves better F1-scores on CR ($+0.2$), MPQA ($+0.8$), MR ($+0.4$), SUBJ ($+0.6$) and SST-5 ($+0.4$); on the SST-2 dataset, its F1-score is slightly lower than RoBERTa's ($-1.3$), but its accuracy is higher ($+1.6\%$). Numerically, the improvement over RoBERTa is small, largely because RoBERTa already achieves strong results on these six datasets, which also explains why our model falls slightly behind it on one of them.
- From Figure 7, we can clearly see that the ComplexQNN is almost always at the highest level (except for a slightly lower result on the SST-2 dataset), which indicates that it has a significant performance advantage across the six sentiment-classification datasets.
- Table 4 shows the training time of the six models on the six sentiment-classification datasets, measured on an Nvidia 2080Ti GPU with a batch size of 32. Each entry is the time for one training epoch in the format "minutes:seconds". Ordered from fastest to slowest, the models are TextCNN, GRU, ELMo, BERT (RoBERTa), and ComplexQNN. Our model needs the most training time because the ComplexQNN simulates quantum computation on classical hardware, and complex-valued calculations require additional parameters for the imaginary parts. Even so, the model completes training in a reasonably short time.
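The per-dataset gains quoted above can be reproduced directly from the accuracy values in Table 2 (the F1 comparison with Table 3 works the same way):

```python
# Accuracy values copied from Table 2.
textcnn = {"CR": 78.8, "MPQA": 74.4, "MR": 75.0,
           "SST-2": 81.5, "SUBJ": 90.3, "SST-5": 34.9}
complexqnn = {"CR": 91.2, "MPQA": 91.5, "MR": 89.9,
              "SST-2": 90.4, "SUBJ": 97.3, "SST-5": 54.3}

# Per-dataset improvement of ComplexQNN over TextCNN, and the average.
deltas = {d: round(complexqnn[d] - textcnn[d], 1) for d in textcnn}
average = sum(deltas.values()) / len(deltas)

print(deltas)             # {'CR': 12.4, 'MPQA': 17.1, 'MR': 14.9, ...}
print(round(average, 2))  # 13.28
```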

#### 4.3. Discussion

## 5. Conclusions

## Author Contributions

## Funding

## Data Availability Statement

## Conflicts of Interest

## Abbreviations

Abbreviation | Meaning
---|---
CNN | Convolutional Neural Network
RNN | Recurrent Neural Network
LM | Language Model
QLM | Quantum Language Model
QNN | Quantum Neural Network
NISQ | Noisy Intermediate-Scale Quantum
LSTM | Long Short-Term Memory
GRU | Gated Recurrent Unit
ELMo | Embeddings from Language Models
BERT | Bidirectional Encoder Representations from Transformers
RoBERTa | Robustly Optimized BERT Pretraining Approach
CR | Customer Review
MPQA | Multi-Perspective Question Answering
MR | Movie Review
SST | Stanford Sentiment Treebank
SUBJ | Sentence Subjectivity
VGG | Visual Geometry Group
ResNet | Residual Network
YOLO | You Only Look Once
CTQW | Continuous-Time Quantum Walk
CNM | Complex-valued Network for Matching
TextTN | Text Tensor Network
NNQLM | End-to-End Quantum-like Language Models
GTN | Generation Tensor Network
DTN | Discrimination Tensor Network
ICWE | Interpretable Complex-valued Word Embedding
CICWE | Convolutional Complex-valued Neural Network based on ICWE
ComplexQNN | Quantum-inspired Fully Complex-Valued Neural Network


**Table 1.** The six sentiment-classification datasets.

Dataset | Description | Type | Count
---|---|---|---
CR | Product reviews | pos/neg | 4k
MPQA | Opinions | pos/neg | 11k
MR | Movie reviews | pos/neg | 11k
SST-2 | Movie reviews | pos/neg | 70k
SUBJ | Subjectivity | subj/obj | 10k
SST-5 | Movie reviews | five labels | 11k

**Table 2.** Experimental results on six benchmarking sentiment-classification datasets evaluated with accuracy.

Model | CR | MPQA | MR | SST-2 | SUBJ | SST-5
---|---|---|---|---|---|---
TextCNN | 78.8 | 74.4 | 75.0 | 81.5 | 90.3 | 34.9
GRU | 80.1 | 84.3 | 76.0 | 81.6 | 91.7 | 37.6
ELMo | 85.4 | 84.4 | 81.0 | 89.3 | 94.9 | 48.0
BERT | 88.8 | 89.5 | 84.9 | 93.0 | 95.2 | 52.5
RoBERTa | 90.4 | 90.9 | 89.8 | 88.8 | 96.7 | 53.1
ComplexQNN | 91.2 | 91.5 | 89.9 | 90.4 | 97.3 | 54.3

**Table 3.** Experimental results on six benchmarking sentiment-classification datasets evaluated with F1-score.

Model | CR | MPQA | MR | SST-2 | SUBJ | SST-5
---|---|---|---|---|---|---
TextCNN | 71.2 | 75.5 | 75.9 | 80.9 | 90.1 | 48.8
GRU | 71.0 | 73.3 | 75.5 | 82.3 | 91.8 | 46.7
ELMo | 79.9 | 74.2 | 82.3 | 88.4 | 94.9 | 48.3
BERT | 85.2 | 82.3 | 84.6 | 88.0 | 95.2 | 52.5
RoBERTa | 85.8 | 85.6 | 89.9 | 89.7 | 96.7 | 52.7
ComplexQNN | 86.0 | 86.4 | 90.3 | 88.4 | 97.3 | 53.1

**Table 4.** Training time per epoch (minutes:seconds) on six sentiment-classification datasets.

Model | CR | MPQA | MR | SST-2 | SUBJ | SST-5
---|---|---|---|---|---|---
TextCNN | 0:03 | 0:07 | 0:07 | 0:47 | 0:04 | 0:05
GRU | 0:04 | 0:04 | 0:10 | 1:25 | 0:09 | 0:09
ELMo | 0:12 | 0:12 | 0:30 | 2:39 | 0:33 | 0:53
BERT | 0:12 | 0:15 | 0:33 | 4:31 | 0:38 | 0:35
RoBERTa | 0:12 | 0:15 | 0:33 | 4:31 | 0:38 | 0:35
ComplexQNN | 0:32 | 0:32 | 1:17 | 10:23 | 1:48 | 1:34


© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Lai, W.; Shi, J.; Chang, Y.
Quantum-Inspired Fully Complex-Valued Neutral Network for Sentiment Analysis. *Axioms* **2023**, *12*, 308.
https://doi.org/10.3390/axioms12030308
