Quantum Machine Learning algorithm and Large Language Model

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Evolutionary Algorithms and Machine Learning".

Deadline for manuscript submissions: closed (31 December 2023) | Viewed by 3840

Special Issue Editors


Prof. Dr. Bharat Rawal
Guest Editor
Department of Cybersecurity, Benedict College, Columbia, SC 29204, USA
Interests: cybersecurity; quantum computing; blockchain and AI

Dr. Gopal Chaudhary
Guest Editor
VIPS-TC School of Engineering and Technology, Pitampura, Delhi, India
Interests: AI; ML; pattern recognition

Special Issue Information

Dear Colleagues,

This Special Issue on quantum machine learning algorithms and large language models encourages authors to develop and explore real quantum software that delivers improved accuracy. Because quantum systems produce patterns that are thought to be hard to generate efficiently on classical systems, it is plausible that quantum computers will outperform classical computers on machine learning tasks. Research on quantum machine learning algorithms therefore concentrates on the creation and use of actual quantum software that realizes these advantages. Submissions covering both traditional large language models and innovative practical applications of quantum machine learning are encouraged.

Prof. Dr. Bharat Rawal
Dr. Gopal Chaudhary
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • quantum machine learning algorithm
  • artificial intelligence
  • machine learning algorithms
  • large language model

Published Papers (3 papers)


Research

17 pages, 4086 KiB  
Article
The Impact of Data Preparation and Model Complexity on the Natural Language Classification of Chinese News Headlines
by Torrey Wagner, Dennis Guhl and Brent Langhals
Algorithms 2024, 17(4), 132; https://doi.org/10.3390/a17040132 - 22 Mar 2024
Viewed by 812
Abstract
Given the emergence of China as a political and economic power in the 21st century, there is increased interest in analyzing Chinese news articles to better understand developing trends in China. Because of the volume of the material, automating the categorization of Chinese-language news articles by headline text or title can be an effective way to sort the articles into categories for efficient review. A 383,000-headline dataset labeled with 15 categories from the Toutiao website was evaluated via natural language processing to predict topic categories. The influence of six data preparation variations on the predictive accuracy of four algorithms was studied. The simplest model (Naïve Bayes) achieved 85.1% accuracy on a holdout dataset, while the most complex model (a neural network using BERT) achieved 89.3% accuracy. The most useful data preparation steps were identified, and the underlying complexity and computational costs of automating the categorization process were also examined. It was discovered that the BERT model required 170x more time to train, was slower to predict by a factor of 18,600, and required 27x more disk space to save, indicating it may be the best choice for low-volume applications where the highest accuracy is needed. For larger-scale operations where a slight performance degradation can be tolerated, the Naïve Bayes algorithm could be the better choice. Nearly one in four records in the Toutiao dataset are duplicates, and this is the first published analysis with duplicates removed.
(This article belongs to the Special Issue Quantum Machine Learning algorithm and Large Language Model)
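
For readers who want a concrete starting point, the indented Python sketch below shows a minimal Naïve Bayes headline-classification baseline in the spirit of the simplest model reported above; the scikit-learn pipeline, jieba segmentation, and the toutiao_headlines.csv file with "headline" and "category" columns are assumptions for illustration, not the authors' code.

    # Minimal Naive Bayes baseline sketch (assumed pipeline, not the authors' code).
    import jieba                                         # common Chinese word segmenter (assumption)
    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    df = pd.read_csv("toutiao_headlines.csv")            # hypothetical file with "headline", "category"
    df = df.drop_duplicates(subset="headline")           # the paper stresses removing duplicate records

    X_train, X_test, y_train, y_test = train_test_split(
        df["headline"], df["category"], test_size=0.2, random_state=0)

    vec = TfidfVectorizer(tokenizer=jieba.lcut)          # segment Chinese headlines into words
    model = MultinomialNB()
    model.fit(vec.fit_transform(X_train), y_train)

    pred = model.predict(vec.transform(X_test))
    print("holdout accuracy:", accuracy_score(y_test, pred))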

26 pages, 4989 KiB  
Article
Optimizing Multidimensional Pooling for Variational Quantum Algorithms
by Mingyoung Jeng, Alvir Nobel, Vinayak Jha, David Levy, Dylan Kneidel, Manu Chaudhary, Ishraq Islam, Evan Baumgartner, Eade Vanderhoof, Audrey Facer, Manish Singh, Abina Arshad and Esam El-Araby
Algorithms 2024, 17(2), 82; https://doi.org/10.3390/a17020082 - 15 Feb 2024
Viewed by 1206
Abstract
Convolutional neural networks (CNNs) have proven to be a very efficient class of machine learning (ML) architectures for handling multidimensional data by maintaining data locality, especially in the field of computer vision. Data pooling, a major component of CNNs, plays a crucial role in extracting important features of the input data and downsampling its dimensionality. Multidimensional pooling, however, is not efficiently implemented in existing ML algorithms. In particular, quantum machine learning (QML) algorithms have a tendency to ignore data locality for higher dimensions by representing/flattening multidimensional data as simple one-dimensional data. In this work, we propose using the quantum Haar transform (QHT) and quantum partial measurement for performing generalized pooling operations on multidimensional data. We present the corresponding decoherence-optimized quantum circuits for the proposed techniques along with their theoretical circuit depth analysis. Our experimental work was conducted using multidimensional data, ranging from 1-D audio data to 2-D image data to 3-D hyperspectral data, to demonstrate the scalability of the proposed methods. In our experiments, we utilized both noisy and noise-free quantum simulations on a state-of-the-art quantum simulator from IBM Quantum. We also show the efficiency of our proposed techniques for multidimensional data by reporting the fidelity of results.
(This article belongs to the Special Issue Quantum Machine Learning algorithm and Large Language Model)
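
As a rough intuition pump for the pooling idea described above, the indented Python sketch below applies a single Hadamard-based Haar level to an amplitude-encoded 1-D signal and traces out the detail qubit to halve the dimensionality; it assumes Qiskit's quantum_info utilities and is a simplified illustration, not the authors' decoherence-optimized circuits.

    # Toy illustration of Haar-style pooling on a quantum-encoded 1-D signal
    # (simplified reading of QHT + partial measurement, not the paper's circuits).
    import numpy as np
    from qiskit import QuantumCircuit
    from qiskit.quantum_info import Statevector, partial_trace

    signal = np.array([1.0, 3.0, 2.0, 6.0, 5.0, 7.0, 4.0, 4.0])
    state = Statevector(signal / np.linalg.norm(signal))    # amplitude encoding on 3 qubits

    qc = QuantumCircuit(3)
    qc.h(0)                          # one Haar level: sums/differences of adjacent amplitude pairs

    pooled = partial_trace(state.evolve(qc), [0])           # discard the detail qubit: 8 -> 4 values
    print(np.real(np.diag(pooled.data)))                    # probability mass of the pooled signal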

12 pages, 896 KiB  
Article
Entanglement Distillation Optimization Using Fuzzy Relations for Quantum State Tomography
by Timothy Ganesan and Irraivan Elamvazuthi
Algorithms 2023, 16(7), 313; https://doi.org/10.3390/a16070313 - 25 Jun 2023
Viewed by 1029
Abstract
Practical entanglement distillation is a critical component in quantum information theory. Entanglement distillation is often utilized for designing quantum computer networks and quantum repeaters. The practical entanglement distillation problem is formulated as a bilevel optimization problem. A fuzzy formulation is introduced to estimate the quantum state (density matrix) from pseudo-likelihood functions (i.e., quantum state tomography). A scale-independent relationship between fuzzy relations in terms of the pseudo-likelihood functions is obtained. The entanglement distillation optimization problem is solved using the combined coupled map lattice and dual annealing approach. Comparative analysis of the results is then conducted against a standard dual annealing algorithmic implementation.
(This article belongs to the Special Issue Quantum Machine Learning algorithm and Large Language Model)
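
As a loose illustration of the bilevel structure described above, the indented Python sketch below wraps a toy inner state-estimation fit inside an outer dual-annealing search using SciPy; the cost functions are placeholders and do not reproduce the authors' fuzzy pseudo-likelihood formulation, and the coupled map lattice component is omitted.

    # Toy bilevel sketch: outer dual-annealing search wrapping an inner state fit
    # (placeholder cost functions, not the authors' formulation).
    import numpy as np
    from scipy.optimize import dual_annealing, minimize_scalar

    def inner_fit(noise):
        # Inner level: recover a Bloch-vector length r from a synthetic purity
        # observation, standing in for pseudo-likelihood state tomography.
        observed_purity = (1.0 + (1.0 - noise) ** 2) / 2.0
        res = minimize_scalar(lambda r: ((1.0 + r * r) / 2.0 - observed_purity) ** 2,
                              bounds=(0.0, 1.0), method="bounded")
        return res.x

    def outer_cost(x):
        # Outer level: maximize fidelity of the estimated state to the target,
        # with a toy penalty term for the distillation resources spent.
        noise = x[0]
        r = inner_fit(noise)
        fidelity = (1.0 + r) / 2.0
        return -fidelity + 0.1 * (1.0 - noise)

    result = dual_annealing(outer_cost, bounds=[(0.0, 0.5)], seed=1)
    print("best noise setting:", result.x[0], "cost:", result.fun)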
