Article

A Quantum Computing-Based Accelerated Model for Image Classification Using a Parallel Pipeline Encoded Inception Module

1 Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
2 Department of Software Engineering, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
3 Department of Mathematics, University of Leicester, Leicester LE1 7RH, UK
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(11), 2513; https://doi.org/10.3390/math11112513
Submission received: 9 April 2023 / Revised: 21 May 2023 / Accepted: 25 May 2023 / Published: 30 May 2023
(This article belongs to the Special Issue Quantum Control and Machine Learning in Quantum Technology)

Abstract

Image classification is a research area in which an algorithm is trained to accurately identify subjects in images it has never seen before. Training a model to recognize images within a dataset is significant, as image classification has numerous applications in medicine, face detection, image reconstruction, and other fields. Despite these applications, the main difficulty in this area is the vast amount of computation involved in the classification process, which slows classification. Moreover, as conventional image classification approaches have fallen short of attaining high accuracy, an optimal model is needed, and quantum computing has been developed to resolve this. Owing to their parallel computing ability, quantum-based algorithms can classify vast amounts of image data, which has theoretically confirmed the feasibility and advantages of incorporating a quantum computing-based system into traditional image classification methodologies. Considering this, the present study quantizes the layers of the proposed parallel encoded Inception module to improve network performance. This study demonstrates the flexibility of DL (deep learning)-based quantum state computational methodologies for missing computations by creating a pipeline for denoising, state estimation, and imputation. Furthermore, controlled parameterized rotations are employed for entanglement, a vital component of the quantum perceptron structure. The proposed approach not only possesses the unique features of quantum mechanics but also maintains the weight sharing of the kernel. Finally, the MNIST (Modified National Institute of Standards and Technology) and Fashion MNIST image classification outcomes are attained by measuring the quantum state, and overall performance is assessed to prove the approach's effectiveness in image classification.

1. Introduction

Quantum computing is a fast-growing technology that utilizes the laws of quantum mechanics to solve problems that are intractable for classical computers. Quantum computers can solve particular types of problems faster than classical computers by exploiting quantum mechanical effects such as interference and superposition. Applications in which quantum computers offer a speed boost include ML (machine learning), simulation, and optimization, and these improvements create a wide range of opportunities in every aspect of modern life, including binary classification. The idea of classification is introduced to capture the abstraction and extent of the problem space to which a given object belongs: given a set of classes, the task is to decide the category of the object. Classification is the main ML process in many data science disciplines, such as information retrieval, recommender systems, and data mining. The efficiency of classification methods depends on set theory, vector spaces, and probability, and improving classifier efficiency has been an important research topic in ML in recent decades [1]. As data grow constantly and exponentially, the core challenge lies in identifying innovative methods within ML. The structure of QM (quantum mechanics) has served as a way to resolve these challenges, and many physicists have established the power of QM for information processing. While classical computers use two states, namely 0 and 1, quantum computers use superpositions of the states |0〉 and |1〉 to explore multiple computational paths. QM thus shifts the computational model from bits to quantum bits, referred to as qubits. Several ML algorithms are inspired by the quantum mechanical framework; hence, it is natural to use QM in quantum computers to perform ML processes. In recent decades, algorithms based on quantum information theory have been widely applied to problems such as clustering and classification. The sub-field of ML that utilizes quantum computers to execute machine learning tasks is called quantum machine learning, and it has attracted considerable attention from investigators in recent decades. Recently, various studies have revealed that quantum neural networks (QNNs) can attain excellent classification on certain datasets. Quantum neural networks form a broad research topic that introduces neural network models based on the mechanics of quantum computing. A QNN has several advantages over traditional classification models, such as better testing accuracy, excellent convergence, and a higher learning rate. There are various quantum classification algorithms, including quantum support vector machines, quantum nearest neighbor classification, and quantum discriminant analysis. The idea of the quantum k-nearest neighbor algorithm is similar to that of the traditional k-nearest neighbor; the quantum machine learning model that categorizes classical data using quantum resources is called a quantum support vector machine; and the discriminant analysis built on quantum computing techniques is called quantum discriminant analysis. However, QNNs yield better accuracy in image classification tasks than these other classification algorithms.
Additionally, QNNs work well with the MNIST dataset due to the increasing capability of quantum computers.
Concurrently, neural networks are considered optimal for learning on conventional computers, yet they possess drawbacks [2], such as overfitting and NP-complete training problems. These limitations have paved the way for quantum-based deep learning (DL) approaches. It is not correct to say that quantum-based deep learning techniques can always resolve the issue of overfitting. Overfitting is the most frequent issue in traditional DL models, arising when the model becomes too complicated, and it decreases model performance; it can also occur in quantum-based DL models. However, significant aspects of quantum computing, such as quantum entanglement and the ability to execute computations in parallel, can lead to advanced techniques for reducing overfitting in DL models. Therefore, rather than stating that quantum-based DL approaches entirely resolve overfitting, it is more accurate to say that they can lead to advanced techniques for enhancing the efficiency and performance of DL models. Many proposals, however, cannot be applied to modern hardware, and some have no correspondence with traditional artificial neural networks. Deep learning approaches have therefore attracted more interest, performing better on near-term quantum computers [3,4]. Although such models have the potential to overtake traditional models, a severe bottleneck occurs in training QNNs: QNNs with random structures exhibit low trainability because the gradient vanishes exponentially with the number of input qubits. This vanishing gradient has seriously limited the application of larger QNNs. A previous study [5] provided a feasible solution with strong theoretical guarantees: a QNN with a step-controlled, tree-tensor architecture has a gradient that vanishes at most polynomially with qubit number. Such QNNs with step-controlled structures and tree tensors have been established for binary classification, and simulations showed faster convergence and higher accuracy compared with random-structure QNNs. An existing study [6] used a robust binary encoding strategy that users can apply without domain knowledge of CNNs, and a new quantum-based budding strategy was utilized to ensure the efficiency of the grown CNN. The performance of the suggested algorithm was evaluated using classification accuracy on standard datasets commonly used in DL, and the experimental outcomes showed that the suggested model achieved optimal performance in comparison with conventional methods.
A previous paper [7] used a spiking FFNN (feedforward neural network), referred to as a Spiking QNN (SQNN), to address robust image classification in the presence of adversarial attacks and noise. The SQNN has an inbuilt ability to handle unanticipated noise in test images owing to its use of temporal and spatial information. A standard backpropagation algorithm was used in the variational quantum circuit to avoid SpikeProp, since spike-timing-dependent plasticity (STDP) was found to be inefficient for feedforward SNN (SFNN) training. The SQNN was extensively tested on the PennyLane quantum simulator, and the results show that it outperformed the SFNN, AlexNet, ResNet-18, and a random QNN on unseen, noisier test images from the MNIST, CIFAR10, ImageNet, KMNIST, and Fashion MNIST datasets. The main aim of another study [8] was to create a pre-convolutional neural network to address the shortcomings of recent CNN architectures in terms of classification error. The CNN model is based on the real Fashion MNIST dataset, augmented with three types of images that differ from the originals in three ways. The suggested pre-convolutional structure uses fewer parameters, incurs lower computational cost, and achieves higher accuracy than the existing strategy.
Regarding the existing method, the CNN's convolutional structure was improved by increasing the number of layers to three with max pooling. Using this network, a new dataset was classified, yielding a 0.6% improvement in classification performance over the accuracy of VGG16. In future work, new datasets must be applied to evaluate the classification methods and various convolutional structures. The technique proposed here differs from other hybrid quantum–classical techniques, which usually utilize both quantum and classical computing resources to solve the problem at hand. The main benefit of the proposed technique is its ability to execute parallel computations, which results in faster and more effective classification compared with classical models. Although existing studies have performed binary classification based on QNNs, taking into account their recommended future work and aiming for higher accuracy, the present study replaces and leverages traditional probability theory with general quantum probability theory through an effective QNN-based algorithm, in accordance with the main contributions below:
  • Pre-process the image by resizing it to enhance the data for flexible processing.
  • Quantize the layers of the proposed parallel encoded Inception module to enhance the network performance for effective classification.
  • Reveal the flexibility of DL-based quantum state computing methodologies for missing computations by creating a pipeline for de-noising, imputation, and state estimation.
  • Evaluate the performance of the proposed study with regard to hinge accuracy, hinge loss, accuracy, and loss to confirm the efficacy of this study.

1.1. Significance of Study

The proposed technique emphasizes leveraging the distinctive features of quantum computing to enhance deep learning model performance, particularly for image classification. The utilization of entanglement and other quantum mechanical concepts allows complex computations that are not possible with traditional approaches. In addition, the research suggests a pipeline for denoising, imputation, and state estimation that can address various issues in quantum computing, such as limited qubit connectivity and noise. The research also upholds the weight sharing of the kernel, a significant aspect of well-established deep learning models. Hence, the proposed technique makes effective use of quantum computing for image classification and provides improvements in computational efficiency and accuracy compared with previous hybrid quantum–classical models.

1.2. Paper Organization

The paper is organized as follows, with Section 1 discussing the fundamental notions of quantum computing-based neural networks for binary classification. Following this, a review of conventional works is discussed in Section 2. Then, the overall proposed system with the flow, pseudocode, and processes is presented in Section 3. The overall outcomes attained after the implementation and analysis of the proposed system are included in Section 4. Lastly, the study is concluded in Section 5 with suggestions for the future.

2. Review of Existing Work

Different works undertaken by existing studies for quantum-based binary classification are reviewed in this section, with problem identification.
Quantum parameterized circuits were investigated in a previous study [9] for image recognition using a quantum deep convolutional neural network (QDCNN). Similar to the classical DCNN, the construction arranged a quantum convolutional layer followed by a quantum classification layer. A quantum–classical hybrid training scheme, inspired by the variational quantum algorithm, was established to update the parameters of the QDCNN. The network complexity analysis showed that the model achieves exponential acceleration compared with its traditional counterpart. Additionally, the German Traffic Sign Recognition Benchmark (GTSRB) and MNIST datasets were used for numerical simulation, and the validity and feasibility of the model were verified by the experimental results. Similarly, another study [2] used a quantum discriminator to extract binary features from data and assign them to the appropriate class. Many binary features were obtained using a training algorithm, the computational complexity was analyzed, and the generalizability of the suggested model was explained; on the XOR problem, the results attained optimal classification accuracy. A quantum algorithm allowed the implementation of a general perceptron model using a qubit-based quantum register, and continuously valued input data were analyzed by the quantum artificial neuron [10,11,12], which performed well in classification. Through phase encoding, the neurons leveraged color translational invariance and noise resilience; in particular, the activation function applied via the quantum neuron attained 98% accuracy. The Helstrom measurement-based binary classifier was comprehensively compared against various commonly utilized classifiers [13]: important statistical quantities were analyzed for each algorithm across 14 different datasets, and overall, the new algorithm performed better than the considered classifiers. Another previous study [14] used QuantumFlow, a co-design framework of machine learning and quantum circuits that connects the missing link between them. QuantumFlow comprises a quantum-friendly neural network (QF-Net), an automatic tool (QF-Map) for generating a theoretically grounded execution engine, and the quantum circuit of QF-Net, which supports QF-Net training on a classical computer. Additionally, rather than traditional batch normalization, a quantum-aware batch normalization method was utilized in QF-Net to attain higher accuracy in DNNs. The results showed that the model achieved 97.01% accuracy in differentiating the digits three and six in the widely used MNIST dataset, nearly 14.55% higher than the quantum-aware application. This case study was conducted for binary classification: running on the ibm_essex (IBM quantum processor) backend, a neural network designed by QuantumFlow attained an accuracy of 82%.
A previous study [15] used QuClassi, a construction for multi-class and high-class-count classification with a limited number of qubits. Experiments were conducted on quantum simulators, and the performance of the IonQ and IBM-Q quantum platforms was determined via Microsoft's Azure Quantum platform. The results showed that QuClassi performed better than TensorFlow Quantum, other quantum-based solutions, and QuantumFlow for multi-class and binary classification [16]. QuClassi also outperformed a conventional DNN, attaining 97.37% accuracy with fewer parameters. Likewise, another study [17] aimed to assess and compare the performance of two quantum machine learning (QML) models using publicly available, synthetic, and private datasets. Pre-processing was utilized to map the data into quantum states for quantum-based classification. In particular, the method focused on enhancing the data encoding models, which were implemented using the IBM Qiskit framework. An amplitude encoding technique assisted in enhancing the performance of the Variational Quantum Classifier (VQC) model, and many experiments were conducted using the same parameters and features with the VQC, including amplitude-encoding VQC and basis-encoding VQC. An analog quantum computer was used in a previous study [18] for a quantum variational embedding classifier, in which the control signals vary continuously in time, with a particular focus on implementation using quantum annealers. In the algorithm, traditional data were mapped to time-varying Hamiltonian parameters through a linear transformation on the analog quantum computer. Numerical simulations demonstrated the effectiveness of the suggested algorithm in performing multi-class and binary classification on linearly inseparable datasets such as MNIST digits and concentric circles. The recommended method performed better than traditional linear classifiers, and classifier performance increased with a higher number of qubits. The algorithm's use of quantum annealers to solve practical ML problems has been useful for exploring quantum advantages in QML.
A parameterized quantum circuit served as the basis for a hybrid quantum–classical CNN (HQCNN) method [19] used for image classification, comprising classical and quantum components. The quantum convolutional layer was designed using the parameterized quantum circuit: a linear unitary transformation was performed on the quantum state to extract hidden information, and a pooling operation was performed by a quantum pooling unit. The potential of the HQCNN was demonstrated using the MNIST dataset. Compared with a CNN, the outcomes showed that the HQCNN attained a faster training speed along with high testing-set accuracy, and experimental simulation classified every binary subset of the MNIST dataset with better performance. A previous study [20] used a hybrid quantum–classical neural network for multi-class and binary classification of non-trivial datasets (MNIST and finance data [21]); compared with a purely classical network, advantages were observed across various performance measures. As in classical ML, overfitting was found on the dataset, and various possibilities for regularizing the network were therefore explored. The quantum support vector machine (QSVM) method was utilized in another previous study [22] to solve classification problems on the benchmark MNIST dataset: the kernel matrix and the QSVM variational algorithm were applied to analyze quantum speedup on physical processor backends. A quantum neural network with a CNN structure was used in a previous study [23], utilizing two-qubit interactions throughout the algorithm. In many instances, this QCNN attained better classification accuracy with a smaller number of free parameters; the QCNN algorithm employs shallow-depth, fully parameterized quantum circuits appropriate for noisy intermediate-scale quantum (NISQ) devices [24]. Another method investigated RBM training utilizing quantum sampling on two generations of D-Wave quantum annealers; the new D-Wave Advantage QPU model contains more than 5000 qubits and 35,000 couplers, improving qubit connectivity and noise characteristics [25]. Finally, in a previous study, quantum algorithms were designed for dimensionality reduction and classification and were connected to provide a quantum classifier, which was tested on the MNIST dataset. The quantum classifier was simulated, including errors in the quantum processes, and 98.5% accuracy was reached; the quantum classifier's running time is polylogarithmic in the number of data points and dimensions [26]. The main issues identified through the analysis of traditional works are listed as follows:
  • Conventional studies have attained limited accuracy: the activation function applied by quantum neurons [10] reached 98% accuracy when discriminating images of 0 and 1 from the MNIST dataset; QuantumFlow [14] obtained an accuracy of 82%; and QuClassi [15] achieved 97.37%.
  • In quantum-based learning fields, the low-qubit representation of quantum data and the associated implementation methods necessitate further investigation for better understanding [15].

3. Proposed Methodology

This research intends to propose a quantum-based neural network for the binary classification of the MNIST and Fashion MNIST datasets. Though existing works have endeavored to perform this, they fell short in terms of classification rate. To attain better accuracy, this study proposes methods that follow the sequence of steps shown in Figure 1. Initially, the MNIST dataset is loaded. Then, pre-processing is undertaken, transforming raw data into clean data.
If pre-processing is not considered, errors in the data will persist and reduce output quality. In this phase, the images are resized. Generally, models train more quickly on small images; an image that is twice as large requires the network to learn four times as many pixels, and this time adds up. Hence, pre-processing is applied. The data are then fed into the train/test split, with 80% used for training and 20% for testing.
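To make this pre-processing step concrete, the following is a minimal sketch (assuming TensorFlow, NumPy, and scikit-learn; the 4 × 4 target size and the choice of digits 5 and 7 follow the downscaling and binary-class setup described elsewhere in the paper, while the variable names are illustrative):

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

# Load MNIST and keep two digit classes (5 and 7, per Section 5) for binary classification.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
mask = (y_train == 5) | (y_train == 7)
x = x_train[mask].astype("float32") / 255.0        # scale pixels to [0, 1]
y = (y_train[mask] == 5).astype(int)               # label 5 -> 1, label 7 -> 0
x = tf.image.resize(x[..., None], (4, 4)).numpy()  # downscale: smaller images train faster
# 80/20 train/test split, as described above.
x_tr, x_te, y_tr, y_te = train_test_split(x, y, test_size=0.2, random_state=0)
```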
Following this, quantum state mapping is performed, which is utilized to explore an extensive transformational class that a quantum system could undergo.
The standard notation utilized in quantum mechanics to denote the quantum states is called the Dirac notation. There are 3 basic forms of Dirac notation utilized in this paper, which are presented below:
  • Bra–ket notation: In this notation, a quantum state is denoted as a ket vector |ψ〉, while the bra vector 〈ψ| denotes its complex conjugate. For instance, this manuscript utilizes the notations |0〉 and |1〉 to denote the qubit basis states. The complex conjugates of qubits are represented by 〈0| and 〈1|.
  • Density matrix notation: A density matrix ρ represents the quantum state in this notation, which is a Hermitian matrix. It also describes the quantum system probability distribution. This manuscript utilizes the notation to denote the mixed states that are not entirely in a single basis state. For instance, this manuscript utilizes the notation ρAB to denote the density matrix of a two-qubit system.
  • Operator notation: The quantum operators are denoted as matrices that act upon quantum states in this notation. For instance, this manuscript utilizes the notations X and Z to denote the Pauli-X and Pauli-Z operators that are most frequently utilized in quantum computing. These operators work on qubits and are utilized to execute operations such as measurements and rotations.
To use these notations consistently, it is imperative to employ the appropriate symbols and formatting. For instance, bra–ket notation must use angle brackets for bra vectors and vertical bars for ket vectors, while density matrix notation must use boldface font for matrices.
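To make the three notations concrete, the following is a small NumPy sketch that builds a ket, its bra, a density matrix, and the Pauli operators:

```python
import numpy as np

ket0 = np.array([[1], [0]], dtype=complex)            # |0>
ket1 = np.array([[0], [1]], dtype=complex)            # |1>
psi = (ket0 + ket1) / np.sqrt(2)                      # |psi> = (|0> + |1>)/sqrt(2)
bra_psi = psi.conj().T                                # <psi| is the conjugate transpose
rho = psi @ bra_psi                                   # density matrix rho = |psi><psi|
X = np.array([[0, 1], [1, 0]], dtype=complex)         # Pauli-X operator
Z = np.array([[1, 0], [0, -1]], dtype=complex)        # Pauli-Z operator
assert np.isclose((bra_psi @ psi).item().real, 1.0)   # normalization: <psi|psi> = 1
```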
This research quantizes the layers corresponding to the proposed parallel encoded Inception module to improve network performance. It also demonstrates the flexibility of DL-based quantum state computational methodologies for missing computations through the creation of a pipeline for denoising, state estimation, and imputation. Furthermore, controlled parameterized rotations are taken into account for entanglement, a crucial component of the quantum-based perceptron structure. On this basis, classification is performed. The overall study is assessed with regard to performance metrics to confirm its efficiency.
In classical computation, the traditional bit is deterministic: it is either 0 or 1. A quantum bit can be $|0\rangle$ or $|1\rangle$, or it can remain in a superposition, as given by Equation (1):

$$|\varphi\rangle = \alpha|0\rangle + \beta|1\rangle \tag{1}$$

In Equation (1), $\alpha$ and $\beta$ are complex numbers with $|\alpha|^2 + |\beta|^2 = 1$. When the quantum state $|\varphi\rangle$ is measured, it collapses to $|0\rangle$ with probability $|\alpha|^2$ and to $|1\rangle$ with probability $|\beta|^2$; $\alpha$ and $\beta$ are termed the probability amplitudes of the quantum states. In general, $2^n$ states can be indicated concurrently by $n$ qubits.
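The measurement rule can be illustrated by sampling outcomes for a concrete amplitude pair (the values below are arbitrary, chosen only to satisfy $|\alpha|^2 + |\beta|^2 = 1$):

```python
import numpy as np

alpha, beta = np.sqrt(0.3), np.sqrt(0.7)            # any pair with |a|^2 + |b|^2 = 1
probs = np.array([abs(alpha)**2, abs(beta)**2])
# Repeated measurement collapses |phi> to 0 or 1 with these probabilities.
samples = np.random.choice([0, 1], size=10_000, p=probs)
print(np.bincount(samples) / len(samples))          # approximately [0.3, 0.7]
```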
In the quantum computing environment, the impact of sequences of unitary qubit state transformations corresponds to logic gates. Hence, quantum devices that comprehend the logical conversions within a specific time are termed quantum gates. These gates are the foundation for understanding quantum computation. The quantum gates indicate computation in the quantum environment. This encompasses the quantum features.

3.1. Single-Qubit Gate

When $|\varphi\rangle = \begin{pmatrix} \cos\theta_0 \\ \sin\theta_0 \end{pmatrix}$, a phase rotation can be realized by applying $R(\theta)$ to $|\varphi\rangle$, since $R(\theta)|\varphi\rangle = \begin{pmatrix} \cos(\theta_0 + \theta) \\ \sin(\theta_0 + \theta) \end{pmatrix}$; this amounts to modifying the weight of a matrix.
During quantum-based computation, the rotation gate is a significant qubit gate, given by Equation (2):

$$R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \tag{2}$$
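A quick NumPy check confirms the phase-addition property stated above (the angles are arbitrary test values):

```python
import numpy as np

def R(theta):
    """Rotation gate of Equation (2)."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

theta0, theta = 0.4, 0.9
phi = np.array([np.cos(theta0), np.sin(theta0)])     # |phi>
rotated = R(theta) @ phi
# The rotation shifts the phase angle: theta0 -> theta0 + theta.
assert np.allclose(rotated, [np.cos(theta0 + theta), np.sin(theta0 + theta)])
```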

3.2. Double-Qubit Gate

Double-qubit gates include the CNOT (Controlled-NOT) gate, also termed the quantum XOR gate. It has two inputs ($|x\rangle$ and $|y\rangle$), each a qubit (hence a two-qubit gate), and the operator that transforms them is a matrix with dimensions of $4 \times 4$, given by Equation (3):

$$C_{NOT} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix} \tag{3}$$

This matrix realizes the conversions $|00\rangle \to |00\rangle$, $|01\rangle \to |01\rangle$, $|10\rangle \to |11\rangle$, and $|11\rangle \to |10\rangle$. In this study, a recently suggested distance-based classifier is extended, and an explicit quantum implementation model is afforded. To accomplish this, a certain operation $R_x$ is constructed such that
$$R_x|00\rangle \to a_0|0\rangle + a_1|1\rangle + a_2|2\rangle + a_3|3\rangle \equiv |\mu_x\rangle$$
A probable path to construct $R_x$ involves the use of three rotations $R_y(\alpha_i)$, two of which are controlled, wherein the angles $\alpha_i$ are given as $\alpha_1 = \arctan\sqrt{\frac{|a_2|^2 + |a_3|^2}{|a_0|^2 + |a_1|^2}}$, $\alpha_2 = \arctan\frac{a_1}{a_0}$, and $\alpha_3 = \arctan\frac{a_3}{a_2}$.
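The angle construction can be verified numerically. The sketch below assumes real, non-negative amplitudes (the simplest case) and checks that the three $R_y$ rotations, applied in the usual controlled pattern, reproduce the target amplitudes:

```python
import numpy as np

# Hypothetical target amplitudes; any normalized non-negative vector works.
a = np.array([0.1, 0.2, 0.3, 0.4])
a = a / np.linalg.norm(a)

# Rotation angles from the construction above.
alpha1 = np.arctan2(np.sqrt(a[2]**2 + a[3]**2), np.sqrt(a[0]**2 + a[1]**2))
alpha2 = np.arctan2(a[1], a[0])
alpha3 = np.arctan2(a[3], a[2])

# State after Ry(2*alpha1) on the first qubit, then Ry(2*alpha2) / Ry(2*alpha3)
# on the second qubit controlled on the first being |0> / |1>, starting from |00>.
state = np.array([
    np.cos(alpha1) * np.cos(alpha2),
    np.cos(alpha1) * np.sin(alpha2),
    np.sin(alpha1) * np.cos(alpha3),
    np.sin(alpha1) * np.sin(alpha3),
])
assert np.allclose(state, a)   # the three rotations reproduce |mu_x>
```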
Classification problems handle datasets encompassing more than two data points. One way to handle this relies on utilizing two qubits, which is valuable for a NISQ device. In this case, the second and third quantum registers possess qubit encoding. With this method, the state synthesis is less complicated, and fewer qubits are needed.
It is assumed that for a 2D quantum system, a generalized rotation is adopted, given by Equation (4):

$$R(k) = \begin{pmatrix} \cos\left(\frac{\pi}{2}k - 2\alpha\right) & -\sin\left(\frac{\pi}{2}k - 2\alpha\right) \\ \sin\left(\frac{\pi}{2}k - 2\alpha\right) & \cos\left(\frac{\pi}{2}k - 2\alpha\right) \end{pmatrix} \tag{4}$$

In accordance with the value of the parameter $k$, the controlled impact comprises three cases, as given by Equation (5):

$$R(k)|\varphi\rangle = \begin{cases} \begin{pmatrix} \sin\alpha \\ \cos\alpha \end{pmatrix}, & k = 1; \\[4pt] \begin{pmatrix} \cos\alpha \\ \sin\alpha \end{pmatrix}, & k = 0; \\[4pt] \begin{pmatrix} \cos\left(\frac{\pi}{2}k - 2\alpha\right) \\ \sin\left(\frac{\pi}{2}k - 2\alpha\right) \end{pmatrix}, & 0 < k < 1. \end{cases} \tag{5}$$

3.3. QNN (Quantum Neural Network)

A QNN is a natural advancement of a conventional neural computational system. It utilizes substantial quantum computing power to enhance the information processing ability of the neural network. Hence, QNNs afford beneficial assistance for integrating neural computation and quantum computation.
In accordance with quantum states, a quantum neuron-based model is proposed, as shown in Figure 2. The input is given by $|x_i\rangle$, and the output is presented as the probability of the state $|1\rangle$; the rotation gates $R(\theta_i)$ and the controlled-NOT gate $U(\gamma)$ process the quantum weights and thresholds, where $U(\gamma) = R(f(\gamma))$ and $R(k)$ has already been defined in Equation (4).
To better comprehend the association between the input and the output of quantum neurons, Equation (6) is given as

$$\sum_{i=1}^{n} R(\theta_i)|x_i\rangle = \begin{pmatrix} \cos\tau \\ \sin\tau \end{pmatrix} \tag{6}$$

In Equation (6), $|x_i\rangle$ indicates the quantized input data, given by Equation (7):

$$|x_i\rangle = \begin{pmatrix} \cos\tau_i \\ \sin\tau_i \end{pmatrix}, \quad \tau = \arg\left(\sum_{i=1}^{n} R(\theta_i)|x_i\rangle\right) = \arctan\frac{\sum_{i=1}^{n}\sin(\tau_i + \theta_i)}{\sum_{i=1}^{n}\cos(\tau_i + \theta_i)} \tag{7}$$

After the CNOT gate acts, the above result becomes Equation (8):

$$U(\delta)\sum_{i=1}^{n} R(\theta_i)|x_i\rangle = \begin{pmatrix} \cos\left(\frac{\pi}{2}f(\delta) - \tau\right) \\ \sin\left(\frac{\pi}{2}f(\delta) - \tau\right) \end{pmatrix} \tag{8}$$

In this study, the probability amplitude of the state $|1\rangle$ in the quantum neuron is taken as the output of the corresponding layer. In accordance with quantum principles, the output of an individual network layer is given by Equation (9):

$$y = \sin\left(\frac{\pi}{2}f(\delta) - \arg\left(\sum_{i=1}^{n} R(\theta_i)|x_i\rangle\right)\right) \tag{9}$$
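A small Python sketch of this neuron model may clarify Equations (6)–(9). The activation $f$ is not specified above, so tanh is used purely as a placeholder assumption:

```python
import numpy as np

def rotation(theta):
    # 2D rotation matrix R(theta) from Equation (2)
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def quantum_neuron(tau, theta, delta, f=np.tanh):
    """Toy forward pass of the quantum neuron in Equations (6)-(9).

    tau:   angles encoding the quantized inputs |x_i> = (cos t, sin t)^T
    theta: rotation parameters theta_i (the quantum 'weights')
    delta: threshold parameter passed through the assumed activation f
    """
    # Sum of rotated input states, Equation (6).
    s = sum(rotation(th) @ np.array([np.cos(t), np.sin(t)])
            for t, th in zip(tau, theta))
    # Argument (phase) of the aggregated state, Equation (7).
    arg = np.arctan2(s[1], s[0])
    # Output: amplitude of |1> after the controlled gate, Equation (9).
    return np.sin(np.pi / 2 * f(delta) - arg)

print(quantum_neuron(tau=[0.2, 0.5, 1.0], theta=[0.1, -0.3, 0.7], delta=0.4))
```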
In comparison to conventional backpropagation neural networks, the present study uses a 4-layered neural network. The data in this study are real-valued, and the quantization rules for the input data are given below.
A sample of real values $X = [x_1, x_2, \ldots, x_i]^T$ can be transformed to $|X\rangle = [|x_1\rangle, |x_2\rangle, \ldots, |x_i\rangle]$ using Equation (10) below:

$$|x_i\rangle = \frac{2\pi x_i}{\max(x_i) - \min(x_i)} \tag{10}$$
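A minimal helper implementing this quantization rule (as reconstructed above) could look as follows:

```python
import numpy as np

def quantize(x):
    """Angle-encode real features per Equation (10), as reconstructed here."""
    x = np.asarray(x, dtype=float)
    theta = 2 * np.pi * x / (x.max() - x.min())
    # Each feature becomes a state |x_i> = (cos(theta_i), sin(theta_i))^T.
    return np.stack([np.cos(theta), np.sin(theta)], axis=-1)

print(quantize([0.1, 0.4, 0.9]))   # three encoded input states
```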
In accordance with the above description, the mathematical expressions corresponding to the quantum neural network are given by Equations (11) and (12):

$$h_j = \sin\left(\frac{\pi}{2}f(\delta_j) - \arg\left(\sum_{i=1}^{n} R(\theta_i)|x_i\rangle\right)\right) \tag{11}$$

$$y_k = g\left(\sum_{j=1}^{m} w_{ij} h_j\right) = g\left(\sum_{j=1}^{m} w_{ij}\sin\left(\frac{\pi}{2}f(\delta_j) - \gamma_j\right)\right) \tag{12}$$

In Equation (12), $i = 1, 2, \ldots, n$ and $j = 1, 2, \ldots, m$.
The proposed approach is partitioned into two phases: feature extraction and classification. Initially, feature extraction is performed with a pre-trained Inception-V3 model. Inception-V3 is considered because it utilizes various methodologies to optimize the network and attain better model adaptation. It possesses a deeper network than the Inception-V1 and Inception-V2 models, yet its speed is not compromised, and it is typically less expensive. After feature extraction, the attained score is passed to quantum learning to perform classification. The 4-layered neural network is then trained on the score vector, in accordance with the selected hyper-parameters, to perform classification; X represents the NOT gate. The overall schematic process is illustrated in Figure 3.
Feature Extraction with the Pre-trained Model: features are taken from an Inception-V3 model that encompasses 256 layers, including 64 convolutional layers, 64 ReLU layers, 96 batch normalization layers, 4 max-pooling layers, 1 global pooling layer, 9 average pooling layers, 1 FC (fully connected) layer, 1 output layer, 18 depth concatenation layers, and 1 Softmax layer. Classification is carried out with Softmax in accordance with the class probabilities.
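The following is a hedged sketch of this feature-extraction stage (assuming TensorFlow/Keras and its stock pre-trained Inception-V3; the resizing to 299 × 299, channel replication, and variable names are illustrative assumptions rather than the authors' exact pipeline):

```python
import numpy as np
import tensorflow as tf

# x: a batch of 28 x 28 gray-scale images scaled to [0, 1] (hypothetical stand-in).
x = np.random.rand(8, 28, 28).astype("float32")
imgs = tf.image.grayscale_to_rgb(tf.image.resize(x[..., None], (299, 299)))
backbone = tf.keras.applications.InceptionV3(weights="imagenet")  # includes the Softmax head
scores = backbone(tf.keras.applications.inception_v3.preprocess_input(imgs * 255.0))
# The Softmax score vector is what gets handed on to the quantum classifier.
print(scores.shape)   # (8, 1000)
```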
Quantum computers, similar to traditional computers, use operational gates to regulate and modify qubit configurations. A unitary matrix can be used to describe the transformation from one quantum state to another through quantum gates. Qubit execution relies on the hardware framework of these quantum gates and possesses a unique methodology for generating quantum states. Several qubits can be utilized for running the quantum gates, and quantum circuits, i.e., arrangements of various quantum gates operating on one or more qubits, can be utilized for running a quantum technique. Assume that a Hadamard gate (H) acts on the qubit $|0\rangle$, resulting in the superposition state given in Equation (13):
$$H|0\rangle = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix} = \frac{1}{\sqrt{2}}|0\rangle + \frac{1}{\sqrt{2}}|1\rangle \tag{13}$$
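Equation (13) is easy to confirm numerically:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
ket0 = np.array([1.0, 0.0])
state = H @ ket0                               # (|0> + |1>) / sqrt(2)
assert np.allclose(state, [1 / np.sqrt(2), 1 / np.sqrt(2)])
```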
The overall structure of the 4-qubit methodology is shown in Figure 4. In this case, the control is shown by black dots in the schematic depiction of the CNOT gate, whereas the target is indicated by the "x" symbol inside a circle. When the controlled qubit lies in the $|1\rangle$ state, the CNOT gate inverts the quantum state of the target (from $|0\rangle$ to $|1\rangle$, and vice versa). The parametric quantum gates function in accordance with parameters placed on the gates. $R_X$, $R_Y$, and $R_Z$ are parametric gates whose matrices are given by Equations (14)–(16):
$$R_X(\phi) = e^{-i\phi\sigma_x/2} = \begin{pmatrix} \cos(\phi/2) & -i\sin(\phi/2) \\ -i\sin(\phi/2) & \cos(\phi/2) \end{pmatrix} \tag{14}$$

$$R_Y(\phi) = e^{-i\phi\sigma_y/2} = \begin{pmatrix} \cos(\phi/2) & -\sin(\phi/2) \\ \sin(\phi/2) & \cos(\phi/2) \end{pmatrix} \tag{15}$$

$$R_Z(\phi) = e^{-i\phi\sigma_z/2} = \begin{pmatrix} e^{-i\phi/2} & 0 \\ 0 & e^{i\phi/2} \end{pmatrix} \tag{16}$$

where $e$ denotes the base of the natural exponential. The $R_X(\phi)$, $R_Y(\phi)$, and $R_Z(\phi)$ gates rotate the qubit vector about the x-axis, y-axis, and z-axis, respectively; the qubit vector can thus be rotated about three rotational axes by varying angles through these three parametric gates. Taking into account the general rotation gate, whose matrix structure depends on three parameters, its function is given in Equation (17):
$$R(\phi, \theta, w) = \begin{pmatrix} e^{-i(\phi+w)/2}\cos\left(\frac{\theta}{2}\right) & -e^{i(\phi-w)/2}\sin\left(\frac{\theta}{2}\right) \\ e^{-i(\phi-w)/2}\sin\left(\frac{\theta}{2}\right) & e^{i(\phi+w)/2}\cos\left(\frac{\theta}{2}\right) \end{pmatrix} \tag{17}$$

In Equation (17), $w$ represents the weight parameter. This gate rotates a qubit about the z-axis, then the y-axis, and lastly the z-axis again. The Softmax layer of Inception-V3 generates a score that is later fed into the variational quantum model. A 4-qubit framework is used to train the model; the overall visual depiction of this framework is shown in Figure 4.
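The z–y–z decomposition just described can be checked directly: the matrix of Equation (17) equals $R_Z(w)\,R_Y(\theta)\,R_Z(\phi)$ for any angles (the test values below are arbitrary):

```python
import numpy as np

def RY(phi):
    return np.array([[np.cos(phi / 2), -np.sin(phi / 2)],
                     [np.sin(phi / 2),  np.cos(phi / 2)]])

def RZ(phi):
    return np.diag([np.exp(-1j * phi / 2), np.exp(1j * phi / 2)])

def R(phi, theta, w):
    # General rotation gate of Equation (17).
    return np.array([
        [np.exp(-1j * (phi + w) / 2) * np.cos(theta / 2),
         -np.exp(1j * (phi - w) / 2) * np.sin(theta / 2)],
        [np.exp(-1j * (phi - w) / 2) * np.sin(theta / 2),
         np.exp(1j * (phi + w) / 2) * np.cos(theta / 2)],
    ])

phi, theta, w = 0.3, 1.1, -0.7
assert np.allclose(R(phi, theta, w), RZ(w) @ RY(theta) @ RZ(phi))
```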
In this diagram, each vertical line represents a qubit. The gates are represented by labeled boxes, and the connections between the qubits are indicated by lines or wires linking the gates together. The gates themselves are labeled according to the specific operations applied to each qubit, such as rotation gates about the z-axis (Rz) and the y-axis (Ry), as described earlier. Parameter tuning is performed so that the circuit undergoes a unitary evolution leading to the intended outcomes. The parameters used to train the model are shown in Table 1.
Table 1 shows the learning parameters of the QNN: 4 quantum bits, 6 QNN layers, a learning rate of 0.001, the Adam optimizer, and a batch size of 128 over 8 training epochs. Lastly, classification outcomes are procured by measuring the quantum state.
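For orientation, the following is a hedged sketch (not the authors' exact circuit) of a 4-qubit, 6-layer variational model wired up with the Table 1 hyper-parameters, written with PennyLane; the encoding scheme and entangling pattern are assumptions:

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits, n_layers = 4, 6                      # Table 1
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(features, weights):
    # Angle-encode the four input scores onto the qubits (illustrative choice).
    for i in range(n_qubits):
        qml.RY(features[i], wires=i)
    # Alternate parameterized rotations with entangling CNOTs, Equation (17)-style.
    for layer in range(n_layers):
        for i in range(n_qubits):
            qml.Rot(*weights[layer, i], wires=i)
        for i in range(n_qubits - 1):
            qml.CNOT(wires=[i, i + 1])
    return qml.expval(qml.PauliZ(0))           # measured quantum state -> class score

weights = np.array(0.01 * np.random.randn(n_layers, n_qubits, 3), requires_grad=True)
opt = qml.AdamOptimizer(stepsize=0.001)        # Adam with the Table 1 learning rate
```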

4. Results and Discussion

The outcomes attained through the execution of the proposed system are discussed in this section, covering the dataset description, EDA (exploratory data analysis), and the performance and comparative analysis outcomes.

4.1. Dataset Description

The present study considered the MNIST and Fashion MNIST datasets. MNIST is a classical image classification dataset, and several researchers take it as the initial choice when assessing the classification rate of various algorithms. Nevertheless, images within this dataset are simple and cannot fully reveal the performance of a classifier; hence, in addition to MNIST, Fashion MNIST is also used. The MNIST dataset includes 10 categories of gray-scale hand-written digit images, encompassing 60,000 training samples and 10,000 testing samples, with each image being 28 × 28 pixels. Fashion MNIST is a gray-scale image dataset encompassing 70,000 images in 10 categories, with each image likewise being 28 × 28 pixels.

4.2. EDA (Exploratory Data Analysis)

EDA is a strategy to assess data with the use of visualization methodologies. It is utilized for discovering patterns and trends or to validate assumptions with graphical representations and statistical summaries. First, the original MNIST image before downscaling is shown in Figure 5, while the image after downscaling is shown in Figure 6. Subsequently, the original Fashion MNIST image before normalization is shown in Figure 7, while the image after normalization is shown in Figure 8.

4.3. Performance Analysis

The proposed system was assessed on the MNIST and Fashion MNIST datasets in terms of hinge accuracy, hinge loss, accuracy, and loss; the respective outcomes are discussed in this section. First, the results for the MNIST dataset in terms of hinge accuracy and hinge loss are shown in Figure 9 and Figure 10. In Figure 9, the blue line represents the training accuracy, the orange line represents the testing accuracy, and the light blue and light orange lines represent the accuracies previously obtained by the model.
From Figure 9, it can be seen that the hinge accuracy varies across epochs but trends upward. In contrast, Figure 10 shows that the epoch loss remains low across the epochs; here, the blue line represents the training loss, the orange line represents the testing loss, and the light blue and light orange lines represent the losses previously obtained by the model. In addition, the overall accuracy and loss of the model for the MNIST dataset are shown in Figure 11 and Figure 12.
From Figure 11, the testing accuracy of the model for the MNIST dataset closely tracks the training accuracy, and the testing loss likewise tracks the training loss (Figure 12). From the analytical outcomes, the model accuracy is high and the model loss is low for the MNIST dataset. Additionally, the results for the Fashion MNIST dataset in terms of hinge accuracy and hinge loss are shown in Figure 13 and Figure 14. In Figure 13, the blue line represents the training accuracy, the orange line represents the testing accuracy, and the light blue and light orange lines represent the accuracies previously obtained by the model.
From Figure 13, it can be seen that the hinge accuracy varies across epochs but trends upward. In contrast, Figure 14 shows that the epoch loss remains low across the epochs; here, the blue line represents the training loss, the orange line represents the testing loss, and the light blue and light orange lines represent the losses previously obtained by the model. Additionally, the overall accuracy and loss of the model for the Fashion MNIST dataset are shown in Figure 15 and Figure 16.
From Figure 15, the testing accuracy of the model for the Fashion MNIST dataset closely tracks the training accuracy, and as shown in Figure 16, the testing loss likewise tracks the training loss. From the overall analytical outcomes, the model accuracy is high and the model loss is low for the Fashion MNIST dataset, revealing the better performance of the proposed system.
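For reference, the two headline metrics can be expressed as follows (a hedged sketch assuming TensorFlow; hinge loss expects labels remapped from {0, 1} to {−1, +1}):

```python
import tensorflow as tf

hinge = tf.keras.losses.Hinge()   # mean(max(0, 1 - y_true * y_pred))

def hinge_accuracy(y_true, y_pred):
    # A prediction counts as correct when it falls on the same side of zero as the label.
    match = tf.cast(y_true, tf.float32) * y_pred > 0
    return tf.reduce_mean(tf.cast(match, tf.float32))

y_true = tf.constant([-1.0, 1.0, 1.0, -1.0])
y_pred = tf.constant([-0.8, 0.3, 0.9, 0.2])
print(hinge(y_true, y_pred).numpy(), hinge_accuracy(y_true, y_pred).numpy())
```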

4.4. Internal Comparison

The proposed system is internally compared with regard to hinge accuracy and hinge loss for the Fashion MNIST and MNIST datasets. The attained outcomes are discussed in this section. The results are shown in Table 2, with corresponding results shown in Figure 17 and Figure 18.
From the analytical outcomes, the hinge accuracy for the Fashion MNIST dataset is found to be 0.9996, while the hinge accuracy for the MNIST dataset is shown to be 0.9597. On the other hand, the hinge loss for Fashion MNIST is found to be 0.0101, while the hinge loss for MNIST is shown to be 0.1655. Though the proposed system shows better performance for both datasets, performance seems to be higher for the Fashion MNIST dataset than the MNIST dataset.

4.5. Comparative Analysis

The proposed work has been assessed through comparison with recently published studies [27,28] to demonstrate its better performance relative to conventional works. The proposed approach utilizes the parallel encoded Inception module with quantized layers to enhance the efficiency of the network. It also integrates DL-based quantum state computational approaches for state estimation, denoising, and imputation. The proposed technique uses controlled parameterized rotations for entanglement and manages the weight sharing of the kernel.
PCA-VQC [27]: The PCA-VQC method utilizes PCA (principal component analysis) to reduce the input data dimensionality prior to training the variational quantum classifier. The VQC is trained using a classical optimization algorithm, with the goal of minimizing the loss function on the reduced dataset.
MPS-VQC [27]: The MPS-VQC method utilizes matrix product state (MPS) techniques to denote quantum states prior to training a VQC. The approach of MPS-VQC has been revealed to be more effective when compared to standard VQC techniques for various kinds of quantum data.
The proposed technique performs better than PCA-VQC and MPS-VQC on the Fashion MNIST and MNIST datasets. Furthermore, VQC4 and VQC2 denote two different versions of the variational quantum classifier. The primary difference between VQC4 and VQC2 is the number of qubits utilized in the quantum circuits, which impacts the performance and complexity of the VQCs for classification tasks. VQC4 executes quantum computations on four input qubits and is thus appropriate for classification tasks that require a higher-dimensional feature space.
The procured results are discussed in this section. Initially, a comparison is made with existing methods, namely PCA-VQC (principal component analysis–variational quantum circuits) and MPS-VQC (matrix product state–variational quantum circuits), for the MNIST dataset. The corresponding outcomes are shown in Figure 19 and Figure 20.
From the analytical results, the accuracy for the existing MPS-VQC is shown to be 99.44%, while that of PCA-VQC is revealed to be 87.34%, and the proposed system has 99.97% accuracy. In contrast, the loss for MPS-VQC is shown to be 0.3183, while the proposed system shows a lower loss rate of 0.0189. Following this, a comparison is made with traditional methods, namely PCA-VQC and MPS-VQC, for the Fashion MNIST dataset. The respective outcomes are shown in Table 3, with their equivalent graphical depiction in Figure 21 and Figure 22.
From the analytical outcomes, the accuracy for the existing MPS-VQC4 is shown to be 96.05%, while that of PCA-VQC4 is revealed to be 85.35%, and the proposed system has 99.32% accuracy. In contrast, loss for MPS-VQC4 has been shown to be 0.3561, while the proposed system has a lower loss rate of 0.0157. Following this, a comparison is made with MPS-VQC for the Fashion MNIST dataset. The respective outcomes are shown in Table 4, with their equivalent graphical depiction in Figure 23.
In Table 4 and Figure 23, it is shown that the existing MPS-VQC model has 96% accuracy, while the proposed system has 99.32% accuracy. Hence, from the comparative analysis, the proposed work shows better performance than conventional works. Typically, QNNs utilize substantial quantum computing power to improve the information processing capability of neural networks; thus, QNNs bring benefits by integrating neural computation with quantum computation. In this setting, the Inception-V3 model is considered an efficient model, as it makes use of various methodologies for optimizing the network so as to attain better model adaptation. Owing to these advantages, the proposed system works better than the conventional systems, as confirmed by the results.

5. Conclusions

This study aimed to use the quantum-based Inception module to classify the MNIST and Fashion MNIST datasets, utilizing classes 5 and 7 for binary classification on both datasets. To accomplish this, the layers of the proposed parallel encoded Inception module are quantized to improve the performance of the network for effective classification. The main contributions of this study are:
  • Quantization of the layers of the Inception module: This includes mapping the continuous-valued weights in the Inception module to separate quantum states, which, in turn, allows for more effective computation and possibly higher accuracy.
  • Pipeline for state estimation, denoising, and imputation: The proposed technique involves a pipeline that addresses the challenges related to noise and constrained qubit connectivity in quantum computing. This pipeline involves denoising to eliminate undesired quantum noise, state estimation to deduce the quantum state of systems from noisy measurements, and imputation to fill in missing data if qubits fail.
  • Controlled parameterized rotations for entanglement: Entanglement is a key concept of quantum mechanics that allows complex computations not possible with traditional systems. The proposed technique involves controlled parameterized rotations to produce entanglement and enhance the efficiency of the quantum perceptron structure.
  • Weight sharing of the kernel: The proposed technique manages the concept of weight sharing of the kernel in classical deep learning models, which is significant for decreasing the quantity of parameters. Thus, it enhances the efficiency and performance of the model.
Overall, these contributions demonstrate the advanced usage of quantum computing in image classification, which emphasizes and enhances the accuracy and efficiency of existing models. However, future investigation is required to fully evaluate the performance of the proposed technique.
The flexible nature of DL-based quantum state computational methodologies for missing computations was demonstrated through the creation of a pipeline for imputation, denoising, and state estimation. The proposed study was evaluated in terms of hinge accuracy, hinge loss, accuracy, and loss. Moreover, a comparison was undertaken with recent conventional studies on the MNIST and Fashion MNIST datasets. From the outcomes, the proposed system achieved 99.32% accuracy on the Fashion MNIST dataset and 99.97% accuracy on the MNIST dataset, and its loss rate was lower than that of the conventional systems. Hence, the outcomes confirmed the better performance of the proposed work. In the future, the quantum architecture can be modified and the time complexity should be decreased.

Author Contributions

Conceptualization, S.A., A.A., A.B., M.S., A.G. and S.W.; methodology, S.A., A.A., A.B., M.S., A.G. and S.W.; software, S.A., A.A., A.B., M.S., A.G. and S.W.; validation, S.A., A.A., A.B., M.S., A.G. and S.W.; formal analysis, S.A., A.A., A.B., M.S., A.G. and S.W.; investigation, S.A., A.A., A.B., M.S., A.G. and S.W.; resources, S.A., A.A., A.B., M.S., A.G. and S.W.; data curation, S.A., A.A., A.B., M.S., A.G. and S.W.; writing—original draft preparation, S.A., A.A., A.B., M.S., A.G. and S.W.; writing—review and editing, S.A., A.A., A.B., M.S., A.G. and S.W.; visualization, S.A., A.A., A.B., M.S., A.G. and S.W.; supervision, S.A., A.A., A.B., M.S., A.G. and S.W.; project administration, S.A., A.A. and A.B.; funding acquisition, S.A., A.A., A.B., M.S. and A.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

The authors extend their appreciation to the Deputyship for Research & Innovation, Ministry of Education, in Saudi Arabia for funding this research work through the project number (IF2/PSAU/2022/01/22904).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Friedrich, L.; Maziero, J. Evolution strategies: Application in hybrid quantum-classical neural networks. Quantum Inf. Process. 2023, 22, 132. [Google Scholar] [CrossRef]
  2. Date, P. Quantum Discriminator for Binary Classification. arXiv 2020, arXiv:2009.01235. [Google Scholar]
  3. Garg, S.; Ramakrishnan, G. Advances in Quantum Deep Learning: An Overview. arXiv 2020, arXiv:2005.04316. [Google Scholar]
  4. Beer, K.; Bondarenko, D.; Farrelly, T.; Osborne, T.J.; Salzmann, R.; Scheiermann, D.; Wolf, R. Training deep quantum neural networks. Nat. Commun. 2020, 11, 808. [Google Scholar] [CrossRef]
  5. Zhang, K.; Hsieh, M.-H.; Liu, L.; Tao, D. Toward Trainability of Quantum Neural Networks. arXiv 2020, arXiv:2011.06258. [Google Scholar]
  6. Li, Y.; Xiao, J.; Chen, Y.; Jiao, L. Evolving deep convolutional neural networks by quantum behaved particle swarm optimization with binary encoding for image classification. Neurocomputing 2019, 362, 156–165. [Google Scholar] [CrossRef]
  7. Konar, D.; Sarma, A.D.; Bhandary, S.; Bhattacharyya, S.; Cangi, A.; Aggarwal, V. A shallow hybrid classical-quantum spiking feedforward neural network for noise-robust image classification. Appl. Soft Comput. 2023, 136, 110099. [Google Scholar] [CrossRef]
  8. Obaid, M.A.; Jasim, W.M. Pre-convoluted neural networks for fashion classification. Bull. Electr. Eng. Inform. 2021, 10, 750–758. [Google Scholar] [CrossRef]
  9. Li, Y.; Zhou, R.-G.; Xu, R.; Luo, J.; Hu, W. A quantum deep convolutional neural network for image recognition. Quantum Sci. Technol. 2020, 5, 044003. [Google Scholar] [CrossRef]
  10. Mangini, S.; Tacchino, F.; Gerace, D.; Macchiavello, C.; Bajoni, D. Quantum computing model of an artificial neuron with continuously valued input data. Mach. Learn. Sci. Technol. 2020, 1, 045008. [Google Scholar] [CrossRef]
  11. Easom-McCaldin, P.; Bouridane, A.; Belatreche, A.; Jiang, R.; Al-Maadeed, S. Efficient Quantum Image Classification Using Single Qubit Encoding. IEEE Trans. Neural Netw. Learn. Syst. 2022, 1–15. [Google Scholar] [CrossRef] [PubMed]
  12. Rather, S.A.; Aravinda, S.; Lakshminarayan, A. Creating ensembles of dual unitary and maximally entangling quantum evolutions. Phys. Rev. Lett. 2020, 125, 070501. [Google Scholar] [CrossRef] [PubMed]
  13. Sergioli, G.; Giuntini, R.; Freytes, H. A new quantum approach to binary classification. PLoS ONE 2019, 14, e0216224. [Google Scholar] [CrossRef] [PubMed]
  14. Jiang, W.; Xiong, J.; Shi, Y. Can Quantum Computers Learn Like Classical Computers? A Co-Design Framework of Machine Learning and Quantum Circuits. Res. Sq. 2020. [Google Scholar] [CrossRef]
  15. Stein, S.A.; Baheri, B.; Chen, D.; Mao, Y.; Guan, Q.; Li, A.; Xu, S.; Ding, C. QuClassi: A Hybrid Deep Neural Network Architecture based on Quantum State Fidelity. arXiv 2021, arXiv:2103.11307. [Google Scholar]
  16. Stein, S.A. Quantum Computing Aided Machine Learning Through Quantum State Fidelity. Preprints.org 2021, 2021030583. [Google Scholar]
  17. Maheshwari, D.; Sierra-Sosa, D.; Garcia-Zapirain, B. Variational quantum classifier for binary classification: Real vs. synthetic dataset. IEEE Access 2021, 10, 3705–3715. [Google Scholar] [CrossRef]
  18. Yang, R.; Bosch, S.; Kiani, B.; Lloyd, S.; Lupascu, A. An analog quantum variational embedding classifier. arXiv 2022, arXiv:2211.02748. [Google Scholar] [CrossRef]
  19. Li, W.; Chu, P.-C.; Liu, G.-Z.; Tian, Y.-B.; Qiu, T.-H.; Wang, S.-M. An Image Classification Algorithm Based on Hybrid Quantum Classical Convolutional Neural Network. Quantum Eng. 2022, 2022, 5701479. [Google Scholar] [CrossRef]
  20. Hellstern, G. Analysis of a hybrid quantum network for classification tasks. IET Quantum Commun. 2021, 2, 153–159. [Google Scholar] [CrossRef]
  21. Beikmohammadi, A.; Zahabi, N. A Hierarchical Method for Kannada-MNIST Classification Based on Convolutional Neural Networks. In Proceedings of the 2021 26th International Computer Conference, Computer Society of Iran (CSICC), Tehran, Iran, 3–4 March 2021. [Google Scholar]
  22. Singh, G.; Kaur, M.; Singh, M.; Kumar, Y. Implementation of quantum support vector machine algorithm using a benchmarking dataset. Indian J. Pure Appl. Phys. (IJPAP) 2022, 60, 407–414. [Google Scholar]
  23. Hur, T.; Kim, L.; Park, D.K. Quantum convolutional neural network for classical data classification. Quantum Mach. Intell. 2022, 4, 3. [Google Scholar] [CrossRef]
  24. Bharti, K.; Cervera-Lierta, A.; Kyaw, T.H.; Haug, T.; Alperin-Lea, S.; Anand, A.; Degroote, M.; Heimonen, H.; Kottmann, J.S.; Menke, T.; et al. Noisy intermediate-scale quantum algorithms. Rev. Mod. Phys. 2022, 94, 015004. [Google Scholar] [CrossRef]
  25. Krzysztof, K.; Mateusz, S.; Marek, S.; Rafał, R. Applying a quantum annealing based restricted Boltzmann machine for mnist handwritten digit classification. CMST 2021, 27, 99–107. [Google Scholar]
  26. Kerenidis, I.; Luongo, A. Classification of the MNIST data set with quantum slow feature analysis. Phys. Rev. A 2020, 101, 062327. [Google Scholar] [CrossRef]
  27. Chen, S.Y.-C.; Huang, C.-M.; Hsing, C.-W.; Kao, Y.-J. Hybrid quantum-classical classifier based on tensor network and variational quantum circuit. arXiv 2020, arXiv:2011.14651. [Google Scholar]
  28. Chen, S.Y.-C.; Huang, C.-M.; Hsing, C.-W.; Kao, Y.-J. An end-to-end trainable hybrid classical-quantum classifier. Mach. Learn. Sci. Technol. 2021, 2, 045021. [Google Scholar] [CrossRef]
Figure 1. Overall process of the proposed system.
Figure 2. Quantum-based neuron model.
Figure 3. Proposed quantum model: an illustrative representation.
Figure 4. Overview of the 4-qubit framework.
Figure 5. MNIST test image prior to downscaling.
Figure 6. MNIST image after downscaling.
Figure 7. Fashion MNIST image prior to normalization.
Figure 8. Fashion MNIST image after normalization.
Figure 9. Analysis with regard to hinge accuracy for the MNIST dataset.
Figure 10. Analysis with regard to hinge loss for the MNIST dataset.
Figure 11. Analysis with regard to model accuracy for the MNIST dataset.
Figure 12. Analysis with regard to model loss for the MNIST dataset.
Figure 13. Analysis with regard to hinge accuracy for the Fashion MNIST dataset.
Figure 14. Analysis with regard to hinge loss for the Fashion MNIST dataset.
Figure 15. Analysis with regard to model accuracy for the Fashion MNIST dataset.
Figure 16. Analysis with regard to model loss for the Fashion MNIST dataset.
Figure 17. Analysis with regard to hinge accuracy.
Figure 18. Analysis with regard to hinge loss.
Figure 19. Comparative analysis with regard to training and testing accuracy for the MNIST dataset.
Figure 20. Comparative analysis with regard to training and testing loss for the MNIST dataset.
Figure 21. Comparative analysis with regard to training and testing accuracy for the Fashion MNIST dataset.
Figure 22. Comparative analysis with regard to training and testing loss for the Fashion MNIST dataset.
Figure 23. Accuracy analysis compared with the MPS-VQC hybrid model [28].
Table 1. Learning parameters of QNN.

Number of quantum bits: 4
Total layers in quantum neural network (QNN): 6
Learning rate: 0.001
Optimizer: Adam
Batch size: 128
Epochs: 8
Table 2. Analysis with regard to hinge accuracy and hinge loss.

Dataset | Hinge Accuracy (Training) | Hinge Accuracy (Validation) | Hinge Loss (Training) | Hinge Loss (Validation)
Fashion MNIST | 0.9982 | 0.9996 | 0.0151 | 0.0101
MNIST | 0.9447 | 0.9597 | 0.232 | 0.1655
Table 3. Comparative analysis with regard to accuracy and loss for the Fashion MNIST dataset.

Model | Test Accuracy | Test Loss
PCA-VQC2 [28] | 82.10% | 0.5882
PCA-VQC4 [28] | 85.35% | 0.4806
MPS-VQC2 [28] | 95.55% | 0.358
MPS-VQC4 [28] | 96.05% | 0.3561
Proposed | 99.32% | 0.0157
Table 4. Analysis with regard to accuracy.

Model | Accuracy
MPS-VQC hybrid model [28] | 96%
Proposed | 99.32%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
