Article

A Predictive Model for Student Achievement Using Spiking Neural Networks Based on Educational Data

School of Information Engineering, Shenyang University, Shenyang 110044, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(8), 3841; https://doi.org/10.3390/app12083841
Submission received: 11 March 2022 / Revised: 4 April 2022 / Accepted: 4 April 2022 / Published: 11 April 2022
(This article belongs to the Special Issue Artificial Intelligence in Online Higher Educational Data Mining)

Abstract

Student achievement prediction is one of the most important research directions in educational data mining. Student achievement directly reflects students' mastery of courses and lecturers' teaching level. For college students in particular, achievement prediction not only provides early warning and timely correction for students and teachers, but also gives university decision-makers a method for evaluating the quality of courses. Based on existing research and experimental results, this paper proposes a student achievement prediction model based on an evolutionary spiking neural network. After fully analyzing the relationship between course attributes and student attributes, a student achievement prediction model based on a spiking neural network is established. An evolutionary membrane algorithm is introduced to learn the hyperparameters of the model and thereby improve its accuracy in predicting student achievement. Finally, the proposed model is used to predict student achievement on two benchmark student datasets, and its performance is analyzed by comparison with other experimental algorithms. The experimental results show that the model based on the spiking neural network can effectively improve the prediction accuracy of student achievement.

1. Introduction

Educational institutions such as universities and training centers generate a large amount of educational data. Mining the knowledge hidden in these data helps decision makers within the higher education system improve the quality of education during student development [1]. With the development of society and the popularization of higher education, the number of college students keeps increasing. It is difficult for teachers to track the learning situation of each student, which affects the quality of teaching and learning to a certain extent. As a result, a certain number of students in colleges and universities fail examinations, repeat grades or even drop out every year, which seriously affects their future development. The quality of student training has gradually become a new focus in the field of higher education. Student achievement is one of the key factors that most directly reflects the quality of student training in higher education [2]. Therefore, studying and constructing an efficient method for predicting student achievement has important application value and practical significance.
With the vigorous development of artificial intelligence and big data technology, it has become possible to model and analyze the massive data that colleges and universities have accumulated over many years. It has become the consensus of industry and academia that big data technologies, represented by deep learning and data mining, can discover data patterns, extract valuable information and knowledge, and provide services for solving problems in various fields. At present, big data technologies have been widely used in fields such as finance, medical care, e-commerce, energy, manufacturing, and transportation. In the field of education in particular, more and more big data technologies and statistical methods are used to assist intelligent education systems. These technologies can analyze and process data intelligently in order to extract the valuable knowledge hidden in the data and assist decision-making. They not only help managers make more "informed" decisions, but also support better analysis of the quality of student training. The key question, then, is how to use big data technology to extract generally applicable patterns from massive educational data, so as to help decision makers in the education system clearly understand the quality of talent training.
Educational data mining is an interdisciplinary field that combines computer science, statistics, and pedagogy [3]. The education system contains a large amount of data, and it is difficult to analyze these data with traditional methods [4,5]. Many scholars have therefore begun to use big data technologies to analyze educational data. Applying big data technology to educational data mining reduces the workload of data analysis and makes it possible to analyze the general patterns of the student training process from a global perspective, which helps decision makers manage and supervise the entire training process more scientifically. One of the hotspots of educational data mining is research on predicting student achievement [6,7,8]. In addition to accurately predicting student grades, there are also studies on whether students will pass exams or graduate. Cortez et al. applied business intelligence/data mining techniques to student achievement in secondary education and offered automated tools to aid the field of education; the techniques aim to extract high-level knowledge from raw data [9]. Ramesh et al. used the .NET framework to design a multi-layer perceptron algorithm to predict student performance [10]. Arora et al. proposed a fuzzy probabilistic neural network model for building the academic profile of students; the model can predict a student's academic performance from qualitative observations of the student [11]. Ezz et al. introduced an adaptive recommendation system to predict students' academic performance in the faculty of engineering at Al-Azhar University, which can recommend the best machine learning algorithm for each faculty department [12]. Pimentel et al. presented theoretical concepts for the support vector regression machine learning method with a focus on improving the computational efficiency of their national high school exams for online tests [13]. Yousafzai et al. implemented a Bidirectional Long Short-Term Memory (BiLSTM) network based on improved feature selection to efficiently predict student performance from historical data; the simulation results show that the proposed method achieved a prediction accuracy of 90.16% [14].
The above studies are very meaningful for improving the prediction accuracy of student achievement. These models have been successfully applied to student performance prediction, but there are still some deficiencies in the training and design stages of the prediction models. For example, the analysis of patterns in the educational data itself is often insufficient, and the prediction accuracy of the models needs to be further improved [1,15,16]. To address these problems, this paper explores a student achievement prediction model based on a spiking neural network. Building on our previous work [17,18], this paper studies learning prediction models based on spiking neural networks to further improve the accuracy of student achievement prediction. The proposed model is then applied to two student grade prediction datasets: xAPI-Edu-Data from Alibaba Cloud Tianchi and the student performance dataset from the UCI machine learning repository. The proposed prediction model is compared with state-of-the-art models to verify its effectiveness. The experimental results show that the proposed model outperforms these experimental models in classification accuracy.
The main contributions of this paper can be summarized as follows.
  • Based on the analysis of educational data, an educational data mining model is discussed.
  • A spiking neural network is used for the first time to predict student achievement.
  • An evolutionary spiking neural network is designed and implemented on the basis of the student datasets.
  • Simulation results verify the effectiveness of the proposed model in predicting student achievement.
  • The research results of the proposed model can provide a more targeted reference for researchers and education administrators.
The remainder of this paper is organized as follows. Section 2 discusses the current state of research in educational data mining and spiking neural networks. Section 3 describes the proposed model in detail, especially how to design educational data mining models based on evolutionary spiking neural networks. In Section 4, the proposed model is compared with state-of-the-art models, and the simulation results are evaluated and discussed on the xAPI-Edu-Data and student performance datasets, respectively. Finally, Section 5 summarizes the conclusions of this paper.

2. Related Works

2.1. Research Status of Educational Data Mining

As a multidisciplinary research field spanning education, computer science, statistics, and psychology, educational data mining has rich research content, diverse methods, and diverse research perspectives. The analysis and study of educational data is of great significance for scientifically and effectively improving the quality of education and the level of teaching management [16]. Researchers who have long been engaged in educational data analysis have done a lot of in-depth work, mainly focusing on student achievement prediction, student modeling, learning recommendation, analysis and visualization, etc. [19]. These research directions of educational data mining are discussed below.
Student achievement prediction uses relevant information about students to predict their future academic performance [20]. A prediction task can be either a classification task or a regression task, such as predicting the probability of a student failing, predicting a student's grade rank, or predicting a student's achievement in a course.
Student modeling reveals the learning characteristics of students by building models of student behavior, learning strategies and cognitive abilities [21], for example, identifying students' behavior and finding the relationship between students' learning behavior patterns and their personality traits.
A learning recommendation system is a technology that recommends courses, learning materials, learning methods or professional directions to students based on their personality characteristics and academic performance [22], for example, recommending suitable majors for students based on their academic performance in their first year of admission, or recommending the order of course content to students based on their log records and personal information in the learning system.
Analysis and visualization refers to the use of visualization techniques such as histograms, line charts, heat maps, and word clouds to present the knowledge or information contained in educational data, so that people can obtain and understand information faster, more conveniently and more intuitively.
Among the above research directions, student achievement prediction is one of the important branches in the field of educational data mining. Scholars have carried out a lot of fruitful work, but research on predicting student achievement by introducing artificial intelligence or big data technology is still in the exploratory stage, and it is difficult for traditional statistical methods to predict student achievement effectively. Therefore, this paper chooses student achievement prediction as its main research content, in order to contribute to in-depth research in this direction.

2.2. Spiking Neural Network

The spiking neural network is called the third-generation artificial neural network; it is a discrete, biologically inspired network model that is closest to real neural systems. Traditional artificial neural networks are still limited to von Neumann architectures for information processing and learning [23]. In the von Neumann architecture, the memory and the processor are separated from each other, so a large amount of energy is consumed when performing massive data operations [24]. In contrast, a spiking neural network encodes information into spike sequences for processing; that is, the input information is processed and transmitted through direct action potentials at synapses. It adopts mechanisms such as plastic synapses and spike-time coding to simulate the spatiotemporal properties of neural networks, and its structure is closer to biological neurons. Therefore, it has higher biological plausibility and more computational power than traditional neural networks [25].
The theory of spiking neural networks has a corresponding biological basis, which gives them both good biological interpretability and access to information in the time dimension. Compared with traditional neural networks, spiking neural networks have more efficient computing power and are relatively easy to implement in hardware, because their event-driven nature reduces the power consumption of operations [26]. The excellent performance and advantages of spiking neural networks have attracted a large number of internationally renowned teams to long-term, in-depth research [27,28]. The main research directions of spiking neural networks are discussed as follows.
  • Neurons. Biological neurons are generally simulated through neuron models. The neuron model is the basis for building a spiking neural network; different types of neurons are connected to each other to form various types of neural network models. Common spiking neuron models include the Izhikevich model, the HH (Hodgkin–Huxley) model, the LIF (leaky integrate-and-fire) model, and the SRM (spike response model). Among them, LIF and SRM are the most commonly used neuron models, and many learning algorithms are designed and implemented on the basis of these neuron models or their variants. A minimal LIF simulation sketch is given after this list.
  • Network topology. The topology of a neural network includes the number of network layers, the number of neurons in each layer, and the way neurons are connected to one another. The topology of an artificial neural network is usually divided into an input layer, hidden layers and an output layer, connected in sequence. The neurons of the input layer receive input information from the outside world and pass it to the neurons of the hidden layer. The hidden layer is responsible for information processing and transformation within the neural network and is usually designed as one or more layers according to the needs of the transformation. Like the topology of traditional artificial neural networks, the structure of spiking neural networks mainly includes feedforward spiking neural networks, recurrent spiking neural networks and hybrid spiking neural networks.
  • Spike sequence encoding. For the encoding of input information, researchers have proposed a variety of spike sequence encoding methods for spiking neural networks by studying the information encoding mechanisms of biological neurons, for example, time-to-first-spike coding, delayed phase coding, and population coding.
  • Learning algorithm. A spiking neural network contains hyperparameters such as the network topology, the number of neurons, and the weights. During network training, these hyperparameters are determined by a learning algorithm, which directly determines the output accuracy of the spiking neural network. Therefore, scholars have carried out a lot of research on learning algorithms for spiking neural networks, mainly focusing on unsupervised learning, supervised learning, semi-supervised learning and reinforcement learning.
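For concreteness, the following is a minimal sketch of the LIF dynamics mentioned above: a membrane potential that leaks toward rest, integrates the input current, and emits a spike and resets when it crosses a threshold. All parameter values are illustrative and are not taken from this paper.

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau_m=20.0, v_rest=0.0,
                 v_reset=0.0, v_threshold=1.0, r_m=1.0):
    """Simulate a leaky integrate-and-fire (LIF) neuron.

    input_current: one input value per time step.
    Returns the membrane potential trace and the spike time indices.
    """
    v = v_rest
    potentials, spike_times = [], []
    for t, i_t in enumerate(input_current):
        # Leaky integration: decay toward rest, driven by the input current.
        v += dt * (-(v - v_rest) + r_m * i_t) / tau_m
        if v >= v_threshold:      # threshold crossing -> emit a spike
            spike_times.append(t)
            v = v_reset           # reset the membrane potential
        potentials.append(v)
    return np.array(potentials), spike_times

# A constant supra-threshold current produces a regular spike train.
trace, spikes = simulate_lif(np.full(200, 1.5))
print(spikes[:5])
```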
As the third generation of neural networks, spiking neural networks have great computational potential. However, due to the complex structure of the neurons and the large number of hyperparameters, application scenarios are still few. A review of the literature shows that applications of spiking neural networks in educational data mining are rare. Therefore, this paper tries to use a spiking neural network to predict student achievement. The results of this paper in educational data mining can expand the application scope of spiking neural networks.

3. Proposed Method

In order to better obtain and analyze educational data, we first give the design scheme of student achievement prediction model. On this basis, we propose a student achievement prediction model based on spiking neural network for educational data mining.

3.1. Scheme of Student Achievement Prediction

The design scheme of the student achievement prediction model is discussed in detail, as shown in Figure 1. This scheme is a general framework for student performance prediction; it is not limited to the model proposed in this paper, and other models can also be used. As can be seen in Figure 1, the scheme mainly consists of five parts: datasets, data preprocessing, data extraction, data modeling and application. Each part in Figure 1 is described below.
  • Datasets consist of raw data from databases, documents, or websites. Research on student achievement prediction has mostly come from education and psychology and has used data from questionnaires or student self-reports [29]. Generally, acquiring this kind of data requires first understanding the structure and meaning of the original student achievement data involved in the task and determining the required data items and data extraction principles; the extraction of the relevant student data is then completed using appropriate means and strict operating specifications. This process involves considerable domain knowledge, and the arguments of experts and users can be combined to obtain variables that are highly correlated with student performance. If multi-source data are involved, attention must be paid to connecting the heterogeneous data sources and converting data formats, because the software and hardware platforms differ. If the confidentiality of student data is involved, more attention should be paid to the handling of such data, and remarks should be attached to the relevant data for reference. Studies have found that a possible reason why the prediction accuracy of a model cannot be improved is the quality of the data source. When acquiring raw data, it is therefore particularly important to minimize errors at the source, especially human errors. Currently, the main sources of datasets for student achievement prediction are education management systems, offline historical educational datasets, and standardized test datasets.
  • Data preprocessing needs to complete data cleaning, data integration, data transformation and other operations. In the whole data mining process, data preprocessing takes about 60% of the time, and the subsequent mining work only accounts for about 10% of the total workload. The preprocessed data can not only save a lot of space and time, but also help the predictive model to make better decisions and predictions. Due to the different sources of educational data, the attributes and feature dimensions of student data are inconsistent. In order to obtain better quality modeling data, certain data cleaning, integration and transformation must be performed.
    Among them, data cleaning is the most time-consuming and tedious step, but it is also the most important one in the data preparation process. It can effectively reduce conflict situations that may arise during the learning process. Raw data containing noise, errors, missing values or redundancy can be processed as follows.
    - Noise data. Data smoothing techniques are the most widely used methods for dealing with such noisy data [30].
    - Error data. For wrong data tuples, we change, delete or ignore the erroneous data after analyzing the datasets.
    - Missing data. We use global constants or attribute means to fill null values, and use regression methods, derivation-based Bayesian methods or decision trees to fix certain attributes of the data [31].
    - Redundant data. We remove redundant parts of the data to improve the processing speed of the prediction model.
    Data integration is a data storage technology and process that combines data from different data sources such as databases, networks, or public files. Since the data integration of different disciplines involves different theoretical foundations and rules, data integration can be said to be a difficult point in data preprocessing. Naming rules and requirements for each data source may be inconsistent. To extract data from multiple data sources into a database, all data formats must be unified in order to ensure the accuracy of the experimental results. Generally, each data source needs to be modified according to a unified standard, and then the data of different data sources can be uniformly extracted into the same database.
    Data transformation uses linear or nonlinear mathematical transformations to compress multi-dimensional data into fewer dimensions and to eliminate differences in characteristics such as space, attributes, time and precision. Although these methods are usually lossy with respect to the original data, the results tend to have greater utility. To a certain extent, data that have undergone the transformation operation allow the prediction model to achieve better prediction accuracy and execution efficiency.
  • Data extraction divides the data into a training dataset and a test dataset. The training dataset is used to build a classifier by fitting the model parameters to the learning samples; that is, the learning method determines the hyperparameters of the model on the training dataset, and the prediction method is built from it. After the model is trained, the test dataset is mainly employed to evaluate the discriminative ability and generalization ability of the model. A minimal sketch of the preprocessing and splitting steps is given after this list.
  • Data modeling addresses two main types of forecasting problems: classification and numerical prediction. Both use data to determine future outcomes. Classification is used to predict discrete categories of data objects, where the attribute values to be predicted are discrete and unordered. Numerical prediction is used to predict continuous values of data objects, where the attribute values to be predicted are continuous and ordered. The classification data model reflects how to find characteristic knowledge about the common nature of similar things and the differences between different things; a classification model is built through supervised training and is then used to classify samples of unknown class. A predictive model is similar to a classification model and can be viewed as a map or function y = f(x), where x is the input tuple and the output y is a continuous or ordered value. Unlike a classification algorithm, the attribute values that a prediction algorithm needs to predict are continuous and ordered.
  • Application refers to applying the above process to classification or prediction to solve practical problems. The specific application of the data model in this paper is for educational data mining. More specifically, the proposed model is applied to solve the problems of student achievement prediction.
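As a concrete illustration of the preprocessing and data extraction steps above, the following is a minimal sketch using pandas and scikit-learn. The file name, the choice of mean imputation, and the integer encoding of categorical attributes are assumptions for illustration; the column names follow the xAPI-Edu-Data attributes discussed in Section 4.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical file name; xAPI-Edu-Data contains attributes such as
# 'raisedhands', 'VisITedResources', 'StudentAbsenceDays' and the label 'Class'.
df = pd.read_csv("xAPI-Edu-Data.csv")

# Data cleaning: drop duplicate records and fill missing numeric values
# with the attribute mean, as described above.
df = df.drop_duplicates()
numeric_cols = df.select_dtypes("number").columns
df[numeric_cols] = df[numeric_cols].fillna(df[numeric_cols].mean())

# Data transformation: encode categorical attributes as integer codes.
for col in df.select_dtypes("object").columns.drop("Class"):
    df[col] = df[col].astype("category").cat.codes

# Data extraction: 80/20 split into training and test sets, stratified
# on the grade label so that class proportions are preserved.
X, y = df.drop(columns="Class"), df["Class"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)
print(X_train.shape, X_test.shape)
```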

3.2. Student Achievement Prediction Model

Figure 2 shows a student achievement prediction model based on an evolutionary spiking neural network. First, the processed modeling data is divided into a training dataset and a test dataset according to a certain proportion. Next, the evolutionary spiking neural network model is proposed for the student achievement prediction. Then, the proposed model is trained using the training dataset, and the evolutionary membrane algorithm is used to optimize the hyperparameters of the proposed model to obtain the best output performance. A test dataset is chosen to evaluate the performance of the proposed model. On the test dataset, the prediction effect of the proposed model is analyzed by some evaluation indicators.
As can be seen in Figure 2, the working mode of the spiking neural network proposed in this paper can be roughly divided into three parts: the encoding of the input data, the training and learning of the spiking neural network, and the decoding and output of the prediction results. Each part of the proposed model is discussed as follows.
  • We take the student dataset as the input of the proposed model and encode these data as the input spike sequence using time-to-first-spike encoding [32] (a small encoding sketch follows this list).
  • The input spike sequence is passed to the neurons for transmission and processing, and the learning rate and synaptic time delay are then optimized using the evolutionary membrane algorithm to achieve adaptive tuning of the hyperparameters of the proposed model.
  • The processing result of the neuron is passed to the output layer. The output layer outputs the predicted spike sequence, and calculates the mean squared error between the actual spike sequence and the expected output spike sequence.
  • The model adjusts the learning rate and synaptic time delay by repeatedly calling the evolutionary membrane algorithm to reduce its prediction error. Once the mean squared error falls below a certain limit or the number of iterations reaches the stopping criterion, the proposed model outputs a spike sequence and decodes it into a prediction result.
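The sketch below shows one common form of time-to-first-spike encoding, in which larger feature values fire earlier within a fixed time window. It is a simplified illustration; the exact normalization and time window used in [32] and in this paper may differ.

```python
import numpy as np

def first_spike_encode(features, t_max=100.0):
    """Time-to-first-spike encoding: larger values fire earlier.

    features: real-valued attributes of one student record.
    Returns one spike time per input neuron within [0, t_max].
    """
    x = np.asarray(features, dtype=float)
    span = x.max() - x.min()
    # Normalize each value into [0, 1]; a constant vector maps to zeros.
    x_norm = (x - x.min()) / span if span > 0 else np.zeros_like(x)
    # Strong inputs spike early, weak inputs spike late.
    return t_max * (1.0 - x_norm)

# Example: raised hands, visited resources and absences for one student.
print(first_spike_encode([80, 65, 3]))
```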
For a deeper understanding of the proposed model, a flowchart of the proposed student achievement model based on evolutionary spiking neural network is shown in Figure 3.
Figure 3 shows the working process of the entire model. The model first determines the characteristic data that affect student performance as the input of the spiking neural network and uses time-to-first-spike coding to convert these data into input spike sequences and expected spike sequences. The input spike sequences are then passed to the neurons of the spiking neural network, where the data are learned and trained. The result learned by the neurons is passed to the output layer, and the mean squared error between the actual spike sequences and the expected spike sequences is calculated. Next, the error value is compared with the error threshold, and it is checked whether the number of iterations meets the termination requirements. The evolutionary membrane algorithm is used to adjust the learning rate and synaptic time delay of the neurons, and this process is executed cyclically until the error value meets the requirements or the number of iterations satisfies the ending condition.
The evolutionary membrane algorithm used in this paper consists of objects, a membrane structure, and reaction rules. An object represents the hyperparameters of the spiking neural network. The membrane structure is a two-layer structure composed of a skin membrane and several inner membranes. The chemical reaction optimization algorithm serves as the reaction rules that evolve the objects within the membranes. The evolutionary membrane algorithm then evaluates each object in the membranes and compares their fitness values. Chemical reaction optimization acts as an operator that generates offspring objects, ensuring the diversity of candidate objects and reducing, to a certain extent, the probability of premature convergence during evolution. This process of evolving the next generation of spiking neural networks is repeated until the termination condition is met, at which point the best spiking neural network model is selected to predict student achievement. Finally, when the output error reaches the required range or the number of iterations satisfies the condition, the adjustment of the learning rate and synaptic time delay is complete; the actual output spike sequence is obtained and transformed into the prediction results.
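To make the optimization loop above more concrete, the following is a heavily simplified, population-based sketch of searching over the learning rate and synaptic time delay. It is not the authors' evolutionary membrane algorithm: the membrane structure is omitted and the chemical-reaction-style update is replaced by a simple random perturbation, while the objective `evaluate` stands in for the spike-train mean squared error.

```python
import random

def evolve_hyperparameters(evaluate, generations=30, pop_size=12, seed=0):
    """Population-based search over SNN hyperparameters.

    evaluate(params) returns the mean squared error between the actual
    and expected spike sequences; lower is better.  Each candidate
    ("object") holds a learning rate and a synaptic time delay.
    """
    rng = random.Random(seed)
    pop = [{"lr": rng.uniform(1e-4, 1e-1), "delay": rng.uniform(1.0, 10.0)}
           for _ in range(pop_size)]
    best = min(pop, key=evaluate)
    for _ in range(generations):
        new_pop = []
        for parent in pop:
            # Perturb the parent's parameters (stand-in for reaction rules).
            child = {"lr": max(1e-5, parent["lr"] * rng.uniform(0.5, 1.5)),
                     "delay": min(10.0, max(0.1, parent["delay"] + rng.gauss(0, 0.5)))}
            # Keep whichever of parent/child has the lower error.
            new_pop.append(min(parent, child, key=evaluate))
        pop = new_pop
        best = min([best] + pop, key=evaluate)
    return best

# Toy objective standing in for the SNN's spike-train error.
print(evolve_hyperparameters(lambda p: (p["lr"] - 0.01) ** 2 + (p["delay"] - 4.0) ** 2))
```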

4. Experimental Studies

To analyze the performance of the proposed prediction model based on an evolutionary spiking neural network, several state-of-the-art experimental algorithms were selected for comparison on the student datasets: logistic regression, decision tree, XGBoost, AdaBoost, neural network, and support vector machine (SVM). First, we describe the benchmark datasets and the evaluation metrics used in our experiments. Second, we provide the comparison results between the proposed algorithm and the experimental algorithms, and further analyze the performance of the proposed algorithm. Finally, the comparison results of the experimental algorithms on the benchmark datasets are discussed.

4.1. Benchmark Datasets and Evaluation Indicators

4.1.1. Benchmark Datasets

Two benchmark datasets are used here to verify the performance of the proposed algorithm. Details of these datasets including xAPI-Edu-Data and UCI datasets are discussed below.
The first classification dataset is xAPI-Edu-Data from Alibaba Cloud Tianchi, which contains 17 variables related to student grades. More specifically, the dataset contains 480 student records collected over two semesters from students of different countries and genders. The 17 attributes include the current education level, class, selected courses, how often students raise their hands in class, attendance characteristics, information about the students' parents, and so on. Student grades are divided into three categories ['L', 'M', 'H'], which serve as the criteria for judging student performance. Here, "L" (0–59) means failing, "M" (60–89) means medium, and "H" (90–100) means a high score. We analyzed the proportion of students in the three grade categories, as shown in Figure 4. As can be seen in Figure 4, most of the students are in the middle grade band, and high-scoring students account for 29.58% of the total.
The dataset contains 480 student records and is divided into training dataset and test dataset. Among them, the training dataset consists of 384 items, accounting for 80% of the total number of samples. The test dataset accounts for 20% of the total number of samples with a total of 96 entries.
The other classification dataset is the student performance dataset from the UCI Machine Learning Repository, which consists of 395 student records for the mathematics subject and contains 33 variables related to student achievement. The dataset was collected from two Portuguese schools in the 2005–2006 school year using school reports and questionnaires. The factors affecting final student performance in the dataset include demographic attributes (e.g., mother's education, household income), social/emotional attributes (e.g., alcohol consumption), and school-related attributes (e.g., number of exam failures). The dataset can be used for binary or five-level classification and regression tasks; the five-level system is shown in Table 1.
The original values in Table 1 are the raw grades in the dataset on a 20-point scale. The classification row lists the five levels of the five-level classification system, and the encoding row gives the integer codes for these five levels, which is convenient for programming.
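As an illustration of the mapping in Table 1, the following sketch bins the raw 20-point final grade into the five encoded levels. The file name and the use of the 'G3' column follow the UCI student performance (mathematics) file, which is semicolon-separated; treating 'G3' as the target is an assumption consistent with Section 4.2.2.

```python
import pandas as pd

# The UCI student performance file for mathematics is ';'-separated
# and stores the final grade (0-20) in column 'G3'.
df = pd.read_csv("student-mat.csv", sep=";")

# Table 1 mapping: 0-9 -> E(0), 10-11 -> D(1), 12-13 -> C(2),
# 14-15 -> B(3), 16-20 -> A(4).
bins = [-1, 9, 11, 13, 15, 20]
codes = [0, 1, 2, 3, 4]
df["G3_code"] = pd.cut(df["G3"], bins=bins, labels=codes).astype(int)
print(df["G3_code"].value_counts().sort_index())
```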
We analyzed the proportion of students with different scores in the student performance dataset. As can be seen from Figure 5, the overall distribution of grades is acceptable: students with excellent grades account for 6.09% of the total number, but 46.95% of the students fail.
The dataset is composed of 395 student records, which are divided into training dataset and test dataset. Among them, the training dataset consists of 316 entries, accounting for 80% of the total number of samples. The test dataset has a total of 79 items, accounting for 20% of the total samples.
To find the best prediction model for student achievement, the training dataset is used to optimize the hyperparameters of the model. The test dataset is used to evaluate the best solution, which is the final output of the proposed model, and to demonstrate its performance.

4.1.2. Evaluation Indicators

Three evaluation metrics, Precision, Recall, and Accuracy, are used to evaluate the performance of all experimental algorithms. In multi-class classification, these metrics measure the proportion of predicted results that exactly match the corresponding ground-truth results. The detailed definitions of these indicators are given below.
Assume a binary classification problem, where the samples have two categories: positive and negative. There are then four combinations of model prediction results, namely True Positive (TP), False Positive (FP), False Negative (FN), and True Negative (TN), as shown in Table 2. TP means that a sample is of the positive class and the predicted class is also positive. If the sample is of the negative class and the predicted class is positive, it is called FP. Correspondingly, if a sample of the positive class is predicted as the negative class, it is called FN, and if a sample of the negative class is predicted as the negative class, it is called TN.
Precision represents the proportion of samples predicted as positive by the model that are truly positive, as shown in Equation (1).
Precision = TP / (TP + FP)    (1)
Recall indicates how many of the samples with positive labels are correctly predicted by the model, as shown in Equation (2); it differs from Equation (1) only in the denominator.
Recall = TP / (TP + FN)    (2)
Accuracy represents the proportion of samples whose predicted value equals the true value, as shown in Equation (3). The denominator in Equation (3) is always the total number of samples, and the numerator is the number of predictions equal to the true value. The metric is easily extended to multi-class cases, such as 10 classes, where the numerator is the sum over all classes of the samples whose predicted value equals the actual value.
Accuracy = (TP + TN) / (TP + TN + FP + FN)    (3)
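A small sketch of Equations (1)–(3) for the binary case is given below; the confusion-matrix counts are illustrative numbers only.

```python
def precision_recall_accuracy(tp, fp, fn, tn):
    """Compute Equations (1)-(3) from binary confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return precision, recall, accuracy

# Illustrative counts: 40 true positives, 10 false positives,
# 5 false negatives and 45 true negatives.
print(precision_recall_accuracy(tp=40, fp=10, fn=5, tn=45))
```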

4.1.3. Experimental Conditions

All experimental algorithms are compared with the proposed algorithm, including logistic regression, decision tree, XGBoost, AdaBoost, neural network, SVM. The solution performance of all experimental algorithms is calculated according to the above indicators.
Simulations for all experiments were run on a Windows 10 Pro host with dual Intel Xeon Platinum 8160 processors (33 MB cache, 2.10 GHz) and 160 GB of physical RAM; the two Xeon CPUs together provide 48 cores and 96 threads. All experimental algorithms are implemented in PyCharm Community Edition using Python 3.10.

4.2. Comparing the Results of All Experimental Algorithms

The proposed model is compared with other experimental models on the above two datasets. To further illustrate their differences, three evaluation metrics are used to evaluate these algorithms. The experimental results on the two datasets are discussed in detail below.

4.2.1. Comparing Results with All Experimental Algorithms on the xAPI-Edu-Data Dataset

xAPI-Edu-Data is chosen to test the performance of all experimental models. To ensure that all experimental models perform successfully, the xAPI-Edu-Data dataset is analyzed below. The data content of xAPI-Edu-Data is first visualized, which can intuitively understand the characteristics of the dataset.
Figure 6 shows the relationship between student performance and gender for this dataset.
It can be clearly seen that far fewer female students fail than male students. Almost the same number of female students achieved "M" as achieved "H", and the numbers of male and female students achieving "H" are similar, but on "M" there are more male students than female students.
A heatmap is a very popular form of data display; it uses different color blocks to divide all attributes of the dataset into different hierarchical intervals, so as to visually display the data by partition. A heatmap of the student feature attributes of xAPI-Edu-Data is shown in Figure 7.
Figure 7 visually shows the degree of correlation between student attributes. It is not difficult to find that the most strongly correlated of these attributes include 'Relation', 'raisedhands', 'VisITedResources', 'AnnouncementsView', 'Discussion', 'ParentAnsweringSurvey', 'ParentschoolSatisfaction', and 'StudentAbsenceDays'.
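A minimal sketch of how such a correlation heatmap could be produced with pandas and seaborn is shown below; the file name is assumed, and the exact styling of Figure 7 in the paper may differ.

```python
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# Hypothetical file name; the numeric xAPI-Edu-Data attributes such as
# 'raisedhands', 'VisITedResources', 'AnnouncementsView' and 'Discussion'
# are the ones whose pairwise correlations appear in the heatmap.
df = pd.read_csv("xAPI-Edu-Data.csv")
corr = df.select_dtypes("number").corr()

sns.heatmap(corr, annot=True, cmap="coolwarm", vmin=-1, vmax=1)
plt.title("Correlation heatmap of numeric student attributes")
plt.tight_layout()
plt.show()
```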
Next, to further analyze the key features of the dataset, feature discovery is performed on the training dataset using the proposed model. The bar graphs in Figure 8 show how important these features are.
Looking at the experimental results in Figure 6, Figure 7 and Figure 8, it can be concluded that there is a strong correlation between student performance and attributes such as 'AnnouncementsView', 'Discussion', 'raisedhands', 'VisITedResources', 'Topic', 'PlaceofBirth', 'StudentAbsenceDays', 'Gender' and 'ParentAnsweringSurvey'. In addition, it is not difficult to see from the data that students who are absent for more than 7 days rarely get high marks, and those who are absent for fewer than 7 days rarely fail.
On the basis of analyzing the above xAPI-Edu-Data, the proposed model is compared with other experimental models on three evaluation indicators. Table 3 shows the simulation results of all experimental models on the xAPI-Edu-Data dataset.
It can be seen from Table 3 that the test accuracy of all experimental algorithms is above 71%. In terms of Precision, Logistic Regression obtains the best result on 'L', the proposed algorithm achieves the best result of 0.85 on 'M', and AdaBoost is superior to the other experimental algorithms on 'H'. In terms of Recall, Logistic Regression again obtains the best result on 'L', the proposed algorithm outperforms all experimental algorithms on 'M', and AdaBoost reaches 0.91 on 'H'. For the last indicator, F1-score, Logistic Regression again obtains the best result on 'L', the proposed algorithm again achieves the best result on 'M', and XGBoost and AdaBoost achieve the same result on 'H', outperforming the other experimental algorithms.
In summary, for the classification of 'L', Logistic Regression gives the best result. The proposed algorithm outperforms some of the experimental algorithms across the different grade levels, its overall accuracy is 0.84375, and its classification results on 'M' are the best. This is because the proposed algorithm uses the membrane structure and reaction rules to optimize the spiking neural network; these mechanisms balance exploration and exploitation well and help the proposed algorithm jump out of local optima and approach the global optimal solution.

4.2.2. Comparing Results with All Experimental Algorithms on the Student Performance Dataset

In this section, the student performance dataset from the UCI machine learning repository is selected as the experimental data to further validate the advantages of the proposed model in comparison with other experimental models.
First, we again analyze the relationship between student achievement and gender. As can be seen in Figure 9, the grade distributions of male and female students are very similar, but more female students fail than male students. The result in Figure 9 suggests that gender has little effect on final grades in the student performance dataset.
Next, a heatmap is used to further analyze the interactions between the different student attributes, as shown in Figure 10. It is not difficult to find from Figure 10 that attributes such as 'school', 'age', 'famsize', 'Medu', 'Mjob', 'reason', 'traveltime', 'failures', 'famsup', 'activities', 'higher', 'romantic', 'freetime', 'Dalc', 'health', 'absences', 'G1' and 'G2' strongly influence one another. Among these student attributes, 'G1' and 'G2' are the most important ones that directly affect the final grade.
Next, the feature importance curve is shown in Figure 11. The target attribute 'G3' has a strong correlation with the attributes 'G1' and 'G2', because 'G3' is the final-year grade (issued in the third semester), while 'G1' and 'G2' correspond to the first- and second-semester grades. As Figure 11 shows, it is difficult to predict 'G3' without 'G1' and 'G2'. However, it would be possible to predict 'G3' in the absence of 'G1' and 'G2' if more informative features could be found from other data.
Observing the above experimental results from Figure 9, Figure 10 and Figure 11, it can be concluded that there is a strong correlation between ‘G1’, ‘G2’ and students’ final grades. Therefore, student achievement is related to attributes such as ‘school’, ‘age’, ‘famsize’, ‘Medu’, ‘Mjob’, ‘reason’, ‘traveltime’, ‘failures’, ‘famsup’, ‘activities’, ‘higher’, ‘romantic’, ‘freetime’, ‘Dalc’, ‘health’, ‘absences’.
On this basis, the proposed and comparative models are executed on the dataset and evaluated using the three metrics. Table 4 describes the comparison results of all experimental algorithms on the student performance dataset. 'A', 'B', 'C', 'D' and 'E' in Table 4 stand for excellent/very good, good, satisfactory, sufficient (pass) and fail, as defined in Table 1.
To verify the advantages of the proposed algorithm, it is run on the student performance dataset and its quantitative results are compared with those of the other experimental algorithms in Table 4.
In terms of Precision, XGBoost, SVM and the proposed algorithm share the best result on 'A', SVM and the proposed algorithm achieve the best result of 0.92 on 'B', AdaBoost obtains the best result on 'C', and the proposed algorithm is better than the other experimental algorithms on 'D' and 'E'. For the second indicator, Recall, SVM obtains the best result on 'A'; XGBoost, AdaBoost, SVM and the proposed algorithm share the best result on 'B'; Decision Tree, AdaBoost and the proposed algorithm achieve the best result on 'C'; AdaBoost has the best result on 'D'; and XGBoost has the best result on 'E'. For the last indicator, F1-score, SVM reaches 1.00 on 'A', the best result among all experimental algorithms; SVM and the proposed algorithm share the best result on 'B'; AdaBoost has the best result on 'C'; and the proposed algorithm has the best result on 'D' and 'E'. Finally, the accuracy achieved by the proposed algorithm is 0.814, which is better than that of all the experimental algorithms.

4.2.3. Discussion

We selected six experimental algorithms on the xAPI-Edu-Data dataset and the student performance dataset, and comparison with these algorithms verifies the effectiveness of the proposed algorithm. Observing the experimental results in Table 3 and Table 4, we can easily find that the proposed algorithm has better classification accuracy, especially on the xAPI-Edu-Data dataset. However, for classes with few samples, the prediction accuracy of the proposed algorithm is not significantly better than that of the comparison algorithms; in other words, the advantages of the proposed algorithm are more obvious for classes with more samples. Based on these experimental results, the proposed algorithm effectively utilizes the evolutionary membrane algorithm, including its objects, reaction rules and membrane structure, and is effective for training the hyperparameters of spiking neural networks. The proposed model achieves good performance, but it has some shortcomings. First, the evolutionary spiking neural network method proposed in this paper can automatically learn effective representations of samples and achieve good model performance, but it is somewhat lacking in model interpretability. Second, this paper only uses standard student achievement datasets and only uses the basic attribute information and grades of students; information such as course knowledge points and compulsory courses has not been explored or utilized. Comprehensive use of multivariate data related to student performance to train models would facilitate in-depth research on evolutionary spiking neural networks.
To sum up, centering on the research topic of student achievement prediction, this paper conducts more in-depth and systematic research on student achievement prediction, with a focus on improving the predictability and accuracy of the proposed evolutionary spiking neural network model for predicting student achievement.

5. Conclusions

With the advancement of educational data analysis, colleges and universities have gradually accumulated massive educational data resources. How to mine valuable information from these educational data and use it to provide better services and support for education and teaching has become an urgent problem to be solved. In this context, the emerging research direction of educational data mining came into being. As one of its important research branches, student achievement prediction has received extensive attention, and many scholars have carried out fruitful work. However, there is still much room for improvement in the predictability and accuracy of existing research on student achievement prediction. Therefore, this paper applies data mining based on the proposed evolutionary spiking neural network to the field of education, taking student achievement as the research object, studying the attribute data of students, and trying to find a method to predict student achievement. The effectiveness of the proposed algorithm is verified by simulations on two benchmark datasets, and good results are achieved. The main work of this paper includes the following:
  • Analyzing and preprocessing student information. It is necessary to have a comprehensive understanding of student attribute data: on the one hand, understanding the structure of the data in the student dataset and transforming the raw data into a form that all experimental algorithms can use; on the other hand, extracting better features by finding the key elements in student data that affect final student achievement.
  • On the basis of a comprehensive understanding of all experimental algorithms, establishing a student achievement prediction model based on an evolutionary spiking neural network. This paper uses six different data mining algorithms for student achievement as comparisons with the proposed prediction algorithm and analyzes the effect of each prediction model.
  • Proposing a specific application case of a student achievement prediction model, which lays a foundation for wider application of student achievement prediction in the future and aims to provide new perspectives and ideas for the application of data mining in the field of education.
  • The proposed model realizes the prediction of student achievement and can provide effective technical support for teaching management work such as teaching students in accordance with their aptitude in the early stage of a course and academic early warning, thus providing a theoretical basis and technical support for the management of students in colleges and universities.
As academic difficulty increases, the requirements for college students will become higher and higher. Colleges and universities will focus on improving the quality of undergraduate teaching, deepening education reform and strengthening the management of the teaching process, with the ultimate goal of effectively improving students' learning outcomes and the quality of talent training. As an important technical means of managing students' academic progress, the proposed student achievement model can promote timely feedback on students' academic problems and realize early detection, early treatment and early resolution of academic problems. The student achievement prediction model proposed in this paper uses student behavior data to give early warning before the final exam and to obtain a predicted grade, so as to prevent problems before they occur. The results of this paper use students' behavior data to judge their current learning status, to judge changes in test scores and whether an early warning is needed, and to decide whether to communicate with the students according to their learning situation and the severity predicted by the model.
Applying the work of this paper to the academic management of colleges and universities can not only identify students with academic problems, but also detect students who tend to fail in time to prevent them from failing their subjects. In short, the application of data mining technology in the field of education is still in the exploratory stage, and it is a popular research direction in current data mining applications. Although this research still has many shortcomings, there is a lot of information worth mining in student behavior data that remains to be discovered. With further research, related fields will inevitably make greater breakthroughs and achieve greater results for the application of data mining in education.

Author Contributions

Conceptualization, C.L. and Z.Y.; methodology, C.L. and Y.D.; software, H.W.; validation, C.L., H.W. and Z.Y.; writing—original draft preparation, C.L.; writing—review and editing, Y.D. and Z.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the 69th batch of general funding projects of the China Postdoctoral Science Foundation, China (Grant No. 2021M693858), the Technological Innovation Program for Young People of Shenyang City, China (Grant No. RC210400), and the Scientific Research Funding Project of the Education Department of Liaoning Province, China (Grant No. 2020JYT05).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The xAPI-Edu-Data dataset used for the evaluation is available on Alibaba Cloud Tianchi at https://tianchi.aliyun.com/dataset/dataDetail?dataId=23563 (accessed on 17 December 2021), and the student performance dataset is available on the UCI Machine Learning Repository at https://archive.ics.uci.edu/ml/datasets/student+performance (accessed on 17 December 2021).

Acknowledgments

The authors would like to thank the editors and reviewers for providing useful comments and suggestions to improve the quality of this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Namoun, A.; Alshanqiti, A. Predicting student performance using data mining and learning analytics techniques: A systematic literature review. Appl. Sci. 2020, 11, 237.
  2. Hooshyar, D.; Pedaste, M.; Yang, Y. Mining educational data to predict students' performance through procrastination behavior. Entropy 2019, 22, 12.
  3. Romero, C.; Ventura, S. Educational data mining and learning analytics: An updated survey. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2020, 10, e1355.
  4. Dutt, A.; Ismail, M.A.; Herawan, T. A Systematic Review on Educational Data Mining. IEEE Access 2017, 5, 15991–16005.
  5. Salal, Y.; Abdullaev, S.; Kumar, M. Educational data mining: Student performance prediction in academic. Int. J. Eng. Adv. Technol. 2019, 8, 54–59.
  6. Chaparro-Pelaez, J.; Iglesias-Pradas, S.; Rodriguez-Sedano, F.J.; Acquila-Natale, E. Extraction, processing and visualization of peer assessment data in moodle. Appl. Sci. 2019, 10, 163.
  7. Tsiakmaki, M.; Kostopoulos, G.; Kotsiantis, S.; Ragos, O. Implementing AutoML in educational data mining for prediction tasks. Appl. Sci. 2019, 10, 90.
  8. Injadat, M.; Moubayed, A.; Nassif, A.B.; Shami, A. Systematic ensemble model selection approach for educational data mining. Knowl.-Based Syst. 2020, 200, 105992.
  9. Cortez, P.; Silva, A.M.G. Using data mining to predict secondary school student performance. In Proceedings of the 5th Annual Future Business Technology Conference, Porto, Portugal, 9–11 April 2008.
  10. Ramesh, V.; Parkavi, P.; Ramar, K. Predicting student performance: A statistical and data mining approach. Int. J. Comput. Appl. 2013, 63, 35–39.
  11. Arora, N.; Saini, J.R. A fuzzy probabilistic neural network for student's academic performance prediction. Int. J. Innov. Res. Sci. Eng. Technol. 2013, 2, 4425–4432.
  12. Ezz, M.; Elshenawy, A. Adaptive recommendation system using machine learning algorithms for predicting student's best academic program. Educ. Inf. Technol. 2020, 25, 2733–2746.
  13. Pimentel, J.S.; Ospina, R.; Ara, A. Learning Time Acceleration in Support Vector Regression: A Case Study in Educational Data Mining. Stats 2021, 4, 41.
  14. Yousafzai, B.K.; Khan, S.A.; Rahman, T.; Khan, I.; Ullah, I.; Ur Rehman, A.; Baz, M.; Hamam, H.; Cheikhrouhou, O. Student-performulator: Student academic performance using hybrid deep neural network. Sustainability 2021, 13, 9775.
  15. Rastrollo-Guerrero, J.L.; Gómez-Pulido, J.A.; Durán-Domínguez, A. Analyzing and predicting students' performance by means of machine learning: A review. Appl. Sci. 2020, 10, 1042.
  16. Khan, A.; Ghosh, S.K. Student performance analysis and prediction in classroom learning: A review of educational data mining studies. Educ. Inf. Technol. 2021, 26, 205–240.
  17. Liu, C.; Du, Y. A membrane algorithm based on chemical reaction optimization for many-objective optimization problems. Knowl.-Based Syst. 2019, 165, 306–320.
  18. Liu, C.; Shen, W.; Zhang, L.; Du, Y.; Yuan, Z. Spike Neural Network Learning Algorithm Based on an Evolutionary Membrane Algorithm. IEEE Access 2021, 9, 17071–17082.
  19. Ma, Y.; Cui, C.; Nie, X.; Yang, G.; Shaheed, K.; Yin, Y. Pre-course student performance prediction with multi-instance multi-label learning. Sci. China Inf. Sci. 2019, 62, 200–205.
  20. Karthikeyan, V.G.; Thangaraj, P.; Karthik, S. Towards developing hybrid educational data mining model (HEDM) for efficient and accurate student performance evaluation. Soft Comput. 2020, 24, 18477–18487.
  21. Ang, K.L.M.; Ge, F.L.; Seng, K.P. Big educational data & analytics: Survey, architecture and challenges. IEEE Access 2020, 8, 116392–116414.
  22. Sokkhey, P.; Navy, S.; Tong, L.; Okazaki, T. Multi-models of educational data mining for predicting student performance in mathematics: A case study on high schools in Cambodia. IEIE Trans. Smart Process. Comput. 2020, 9, 217–229.
  23. Taherkhani, A.; Belatreche, A.; Li, Y.; Cosma, G.; Maguire, L.P.; McGinnity, T.M. A review of learning in biologically plausible spiking neural networks. Neural Netw. 2020, 122, 253–272.
  24. Lobo, J.L.; Del Ser, J.; Bifet, A.; Kasabov, N. Spiking neural networks and online learning: An overview and perspectives. Neural Netw. 2020, 121, 88–100.
  25. Demertzis, K.; Iliadis, L.; Bougoudis, I. Gryphon: A semi-supervised anomaly detection system based on one-class evolving spiking neural network. Neural Comput. Appl. 2020, 32, 4303–4314.
  26. Salt, L.; Howard, D.; Indiveri, G.; Sandamirskaya, Y. Parameter optimization and learning in a spiking neural network for UAV obstacle avoidance targeting neuromorphic processors. IEEE Trans. Neural Netw. Learn. Syst. 2019, 31, 3305–3318.
  27. Zhou, Y.; Jin, Y.; Ding, J. Surrogate-assisted evolutionary search of spiking neural architectures in liquid state machines. Neurocomputing 2020, 406, 12–23.
  28. Tan, C.; Šarlija, M.; Kasabov, N. NeuroSense: Short-term emotion recognition and understanding based on spiking neural network modelling of spatio-temporal EEG patterns. Neurocomputing 2021, 434, 137–148.
  29. Son, L.H.; Fujita, H. Neural-fuzzy with representative sets for prediction of student performance. Appl. Intell. 2019, 49, 172–187.
  30. Mourad, N. Robust smoothing of one-dimensional data with missing and/or outlier values. IET Signal Process. 2021, 15, 323–336.
  31. Xing, Y.Y.; Wu, X.Y.; Jiang, P.; Liu, Q. Dynamic Bayesian evaluation method for system reliability growth based on in-time correction. IEEE Trans. Reliab. 2010, 59, 309–312.
  32. Oh, S.; Lee, S.; Woo, S.Y.; Kwon, D.; Im, J.; Hwang, J.; Bae, J.H.; Park, B.G.; Lee, J.H. Spiking Neural Networks With Time-to-First-Spike Coding Using TFT-Type Synaptic Device Model. IEEE Access 2021, 9, 78098–78107.
Figure 1. Student achievement prediction process scheme.
Figure 2. Architecture of Student Achievement Prediction Model.
Figure 3. Flowchart of Student Achievement Prediction Model.
Figure 4. Student performance analysis on xAPI-Edu-Data.
Figure 5. Student performance analysis on the student performance dataset.
Figure 6. The relationship between student achievement and gender.
Figure 7. Heatmap of student features attributes.
Figure 8. Feature importance comparison of student datasets.
Figure 9. The relationship between student achievement and gender.
Figure 10. Heatmap of student features attributes.
Figure 11. Feature importance comparison of student datasets.
Table 1. The five-level classification system.

Category       | Excellent/Very Good | Good  | Satisfactory | Sufficient | Fail
Original value | 16–20               | 14–15 | 12–13        | 10–11      | 0–9
Classification | A                   | B     | C            | D          | E
Encoding       | 4                   | 3     | 2            | 1          | 0
Table 2. Four combined results between the predicted value of the model and the actual value.

                   | Actual Positive | Actual Negative
Predicted Positive | TP              | FP
Predicted Negative | FN              | TN
Table 3. Comparing results with all experimental algorithms on the xAPI-Edu-Data dataset.

Algorithm           | Class | Precision | Recall | F1-Score | Support | Accuracy
Logistic Regression | L     | 0.87      | 1.00   | 0.93     | 26      | 0.7604
                    | M     | 0.80      | 0.69   | 0.74     | 48      |
                    | H     | 0.56      | 0.64   | 0.60     | 22      |
Decision Tree       | L     | 0.73      | 0.85   | 0.79     | 26      | 0.7292
                    | M     | 0.75      | 0.69   | 0.72     | 48      |
                    | H     | 0.68      | 0.68   | 0.68     | 22      |
XGBoost             | L     | 0.86      | 0.92   | 0.89     | 26      | 0.8213
                    | M     | 0.86      | 0.79   | 0.83     | 48      |
                    | H     | 0.75      | 0.82   | 0.78     | 22      |
AdaBoost            | L     | 0.86      | 0.92   | 0.89     | 26      | 0.8333
                    | M     | 0.90      | 0.75   | 0.82     | 48      |
                    | H     | 0.69      | 0.91   | 0.78     | 22      |
Neural Network      | L     | 0.86      | 0.92   | 0.89     | 26      | 0.7188
                    | M     | 0.77      | 0.62   | 0.69     | 48      |
                    | H     | 0.52      | 0.68   | 0.59     | 22      |
SVM                 | L     | 0.80      | 0.92   | 0.86     | 26      | 0.8021
                    | M     | 0.81      | 0.79   | 0.80     | 48      |
                    | H     | 0.79      | 0.68   | 0.73     | 22      |
Proposed Algorithm  | L     | 0.77      | 0.77   | 0.77     | 26      | 0.8438
                    | M     | 0.85      | 0.83   | 0.84     | 48      |
                    | H     | 0.68      | 0.68   | 0.68     | 22      |
Table 4. Comparing results with all experimental algorithms on the student performance dataset.

Algorithm           | Class | Precision | Recall | F1-Score | Support | Accuracy
Logistic Regression | A     | 0.60      | 0.75   | 0.67     | 4       | 0.7215
                    | B     | 0.75      | 0.64   | 0.69     | 14      |
                    | C     | 0.33      | 0.62   | 0.43     | 8       |
                    | D     | 0.69      | 0.43   | 0.53     | 21      |
                    | E     | 0.91      | 0.97   | 0.94     | 32      |
Decision Tree       | A     | 0.75      | 0.75   | 0.75     | 4       | 0.7468
                    | B     | 0.90      | 0.64   | 0.75     | 14      |
                    | C     | 0.38      | 0.75   | 0.50     | 8       |
                    | D     | 0.71      | 0.57   | 0.63     | 21      |
                    | E     | 0.91      | 0.91   | 0.91     | 32      |
XGBoost             | A     | 1.00      | 0.25   | 0.40     | 4       | 0.7722
                    | B     | 0.80      | 0.86   | 0.83     | 14      |
                    | C     | 0.38      | 0.38   | 0.38     | 8       |
                    | D     | 0.72      | 0.62   | 0.67     | 21      |
                    | E     | 0.86      | 1.00   | 0.93     | 32      |
AdaBoost            | A     | 0.00      | 0.00   | 0.00     | 4       | 0.4684
                    | B     | 0.75      | 0.86   | 0.80     | 14      |
                    | C     | 0.60      | 0.75   | 0.67     | 8       |
                    | D     | 0.36      | 0.90   | 0.51     | 21      |
                    | E     | 0.00      | 0.00   | 0.00     | 32      |
Neural Network      | A     | 0.60      | 0.75   | 0.67     | 4       | 0.6835
                    | B     | 0.73      | 0.57   | 0.64     | 14      |
                    | C     | 0.33      | 0.50   | 0.40     | 8       |
                    | D     | 0.67      | 0.38   | 0.48     | 21      |
                    | E     | 0.79      | 0.97   | 0.87     | 32      |
SVM                 | A     | 1.00      | 1.00   | 1.00     | 4       | 0.7975
                    | B     | 0.92      | 0.86   | 0.89     | 14      |
                    | C     | 0.33      | 0.50   | 0.40     | 8       |
                    | D     | 0.75      | 0.57   | 0.65     | 21      |
                    | E     | 0.91      | 0.97   | 0.94     | 32      |
Proposed Algorithm  | A     | 1.00      | 0.75   | 0.86     | 4       | 0.8140
                    | B     | 0.92      | 0.86   | 0.89     | 14      |
                    | C     | 0.50      | 0.75   | 0.60     | 8       |
                    | D     | 0.83      | 0.71   | 0.77     | 21      |
                    | E     | 0.94      | 0.97   | 0.95     | 32      |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
