1. Introduction
The implementation of artificial neural networks (ANNs) in modern electronic devices requires different network topologies depending on the task at hand, which ranges from clustering and classification to pattern recognition. Depending on the available training data, the learning model can be supervised or unsupervised. The former, relying on labeled datasets, is used in Convolutional Neural Networks (CNNs), which have recently shown impressive performance in cognitive tasks such as recognition [1,2] and prediction [3,4]. During supervised learning, the error between the input target and the output of a CNN is minimized by adjusting the synaptic weights of the network. High accuracy can only be achieved with a large number of training examples and algorithm cycles, which entails the use of power-hungry computers. This significantly restricts the application of CNNs in embedded systems and motivates the development of third-generation neural networks: Spiking Neural Networks (SNNs). SNNs, being unsupervised learners, provide the most realistic emulation of natural neural networks. Like neurons in living organisms, SNNs encode information as sequences of spikes, the precise timing between which is used to update the synaptic weights. The main advantages of SNNs over previous-generation ANNs are their computational speed, superior classification abilities, and efficiency in control problems, all stemming from their ability to derive meaning from few pieces of information about the target. The most promising practical application of SNNs is the construction of an interface between silicon and biological neurons, which in the future could enable direct brain–computer interaction and the development of bionic prosthetic systems, such as thought-driven limbs and neural prostheses for restoring cognitive functions [5,6].
Meanwhile, the performance limits of standard computers in SNN simulation call for dedicated acceleration hardware [6]. An ideal platform for implementing neural networks must be massively parallel, algorithmically flexible, and power-efficient. Massive parallelism can be achieved with either an analog or a digital simulation approach. Analog simulation of neurons using silicon-based Very-Large-Scale Integration (VLSI) circuits was pioneered by Mead [7] in the early 1980s, with a focus on its low power consumption compared to digital systems [8,9]. Modern analog circuits for implementing neural oscillators and synaptic memory often include memristive [10,11] or other experimental CMOS-compatible devices. Despite these advantages, a common drawback of analog neuromorphic circuits is a fundamental measurement limitation: it is impossible to monitor every state variable of every neuron and, therefore, to control these variables flexibly. Thus, from the standpoint of algorithmic flexibility, neural network simulation on digital devices remains relevant. Although advances in GPU-based acceleration of neural network simulation have been reported [12], most researchers consider FPGAs the better-fitting digital platform. The advantages of FPGAs, including great flexibility, low power consumption, real-time operation, and small dimensions, have already facilitated the use of SNN algorithms in embedded systems, for example, for MIMO temperature management [13] or the detection of impurities in natural gas [14].
FPGA implementation implies two features: the use of discrete models instead of continuous ones, and the use of fixed-point arithmetic. To preserve the properties of the neural model, it is necessary to choose a numerical method that ensures good correspondence between the dynamics of the continuous model and those of the discrete model, including the time step, under conditions of limited number representation accuracy. To date, researchers have paid little attention to these implementation features. For example, in the FPGA-based SNN simulator by Pani et al. [6], the simplest solver, the explicit Euler method, is used to implement the Izhikevich neuron. Meanwhile, a comprehensive study on SNN numerical simulation [15] presented strong evidence that using first-order numerical methods with large steps leads to totally incorrect neuron model dynamics. A very recent article [16] substantiated both the interest in fixed-point arithmetic and the need to perform fixed-point neuron modeling via a specific approach.
This paper presents an investigation of the neuron model described by the simplified Hodgkin–Huxley equations [17] in a fixed-point implementation. In Section 2, we propose the fixed-point neuron model, as well as the data type conversion (scaling) technique. In Section 3, the numerical experiments are described, including resonance excitability analysis, chaotic spiking generation analysis, and examination of the neural refractory period and hysteresis. Simulations were carried out in NI LabVIEW 2019 software. Section 4 concludes the paper; a comparative table and some recommendations are given there.
2. Numerical Simulation of the Simplified Hodgkin–Huxley Model Using Fixed-Point Arithmetic
The original system of Hodgkin–Huxley (HH) equations [18] is a classical phenomenological neuron model that determines the dynamical behavior of membrane ion gates. This dynamical system is of the fourth order and includes transcendental functions, which make it time-consuming for large-scale computer simulations and complicated for purely mathematical analysis. Insightful simplifications of the HH model to two-dimensional systems were presented by Rinzel [19] and later by Wilson [17]; the latter is used in this study.
An equivalent electrical circuit for the simplified HH model is shown in Figure 1. The circuit comprises the membrane capacitance C and two voltage-sensitive conductive elements, GNa and GK, connected in series with the batteries ENa and EK, respectively.
The circuit dynamics is described by the following differential equations:

C·dV/dt = −GNa(V)·(V − ENa) − GK·R·(V − EK) + I,
τ·dR/dt = −R + R∞(V),            (1)

where GNa(V) and the steady-state function R∞(V) are the polynomial approximations given in [17], V is the potential difference between the neuron's membrane and the environment, R is the recovery variable, I is the input current, C = 0.8 µF/cm² is the membrane capacitance, and τ = 1.9 ms is the recovery time constant. The right-hand side of the first equation is the sum of the input current and the Na+ and K+ ion currents; the passive leakage current of the original HH model is absorbed into the Na+ current. The second equation governs the behavior of the recovery variable R, which describes the K+ channel as a memristive element.
To move from the continuous model (1) to the set of investigated ordinary differential equation (ODE) solvers, the following methods of numerical integration were used: the Explicit Euler method (EE), the Semi-Explicit Euler method (SEE), the Explicit Midpoint method (EMP), and the Modified Explicit Midpoint method with a smoothing step (MEMP). The choice of first- and second-order explicit methods was determined by the simplicity of their implementation in integer representation and the clarity of the observed numerical effects.
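For concreteness, the four steppers can be sketched in Python. The polynomial coefficients in the right-hand side `f` are illustrative placeholders in the spirit of Wilson's model, not the exact values from [17], and the MEMP smoothing step is written in a Gragg-style form as an assumption, since the paper does not reproduce its exact formulation here:

```python
# Sketch of the four explicit solvers on a two-variable Wilson-type system.
C_M = 0.8    # membrane capacitance, uF/cm^2
TAU = 1.9    # recovery time constant, ms

def f(v, r, i_in):
    """Right-hand side of system (1); polynomial coefficients are placeholders."""
    g_na = 17.8 + 47.6 * v + 33.8 * v * v            # placeholder G_Na(V)
    dv = (-g_na * (v - 0.48) - 26.0 * r * (v + 0.95) + i_in) / C_M
    dr = (-r + 1.24 + 3.7 * v + 3.2 * v * v) / TAU   # placeholder R_inf(V)
    return dv, dr

def step_ee(v, r, i_in, h):
    """Explicit Euler (EE): one evaluation of f per step."""
    dv, dr = f(v, r, i_in)
    return v + h * dv, r + h * dr

def step_see(v, r, i_in, h):
    """Semi-Explicit Euler (SEE): the freshly updated V feeds the R update."""
    dv, _ = f(v, r, i_in)
    v_new = v + h * dv
    _, dr = f(v_new, r, i_in)
    return v_new, r + h * dr

def step_emp(v, r, i_in, h):
    """Explicit Midpoint (EMP): second order, two evaluations of f."""
    dv, dr = f(v, r, i_in)
    dvm, drm = f(v + 0.5 * h * dv, r + 0.5 * h * dr, i_in)
    return v + h * dvm, r + h * drm

def step_memp(v, r, i_in, h):
    """Modified Explicit Midpoint (MEMP) with a Gragg-style smoothing step
    (an assumed variant: two internal substeps, then averaging)."""
    h2 = 0.5 * h
    dv0, dr0 = f(v, r, i_in)
    v1, r1 = v + h2 * dv0, r + h2 * dr0              # first half step
    dv1, dr1 = f(v1, r1, i_in)
    v2, r2 = v + h * dv1, r + h * dr1                # midpoint full step
    dv2, dr2 = f(v2, r2, i_in)
    return 0.5 * (v1 + v2 + h2 * dv2), 0.5 * (r1 + r2 + h2 * dr2)
```

All four steppers share the same interface, so a simulation loop can swap solvers by changing a single function reference.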
2.1. Floating-Point to Fixed-Point Model Conversion
Conversion of the floating-point ODE solvers to integer solvers was implemented using the approach described in [20]. First, the minimum and maximum possible values of each state variable of system (1) were determined by preliminary simulation. After that, the largest absolute value among all state variables and system parameters was selected to determine the number of bits required to store the integer part of the fixed-point data type (FXP). It was found that at least six bits are required to store the integer part of the state variables and parameters. Thus, all state variables and constant coefficients of system (1) were converted to the FXP data type, where one bit is allocated for the sign, seven bits for the integer part, and the remaining bits for the fractional part. Note that the number of integer bits was increased by one to guarantee that the bit grid does not overflow during calculations. In this research, integer models with 32-bit and 64-bit FXP data types were explored, whose fractional parts take 24 and 56 bits, respectively. Investigating models with a longer bit grid is currently of no interest: modern computing platforms do not support hardware arithmetic on machine words longer than 64 bits, and a software implementation of such operations incurs additional hardware costs that negate the benefits of using an integer data type. It would be advantageous to obtain 16-bit FXP models of the system, which would allow the use of low-power, less expensive hardware platforms for implementing large neural networks. However, preliminary simulation showed that system (1) cannot be adequately represented with such a limited bit length.
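The scaling scheme can be illustrated with a short sketch, using plain Python integers to stand in for the hardware FXP type (the helper names here are ours, not the paper's):

```python
FRAC_BITS = 24           # FXP32: 1 sign bit + 7 integer bits + 24 fractional bits
SCALE = 1 << FRAC_BITS

def to_fxp(x):
    """Quantize a float onto the Q7.24 grid (stored as a plain integer)."""
    return int(round(x * SCALE))

def to_float(q):
    """Recover the real value represented by a Q7.24 integer."""
    return q / SCALE

def fxp_mul(a, b):
    # The product of two Q7.24 values carries 48 fractional bits; shifting
    # right by FRAC_BITS returns it to the Q7.24 grid. (Python's >> rounds
    # toward minus infinity for negative values.)
    return (a * b) >> FRAC_BITS

def fits(q, int_bits=7):
    """Check that a value stays inside the signed bit grid."""
    limit = 1 << (int_bits + FRAC_BITS)
    return -limit <= q < limit
```

Addition and subtraction need no rescaling on this grid; only multiplication requires the shift, which is where a rounding error of up to one least significant bit enters at each operation.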
After converting all variables and constants to the FXP data type, adequate simulation is still not guaranteed: in the course of the calculations, the values of the state variables can overflow the bit grid. This situation is most probable while calculating the ion-current part of the first equation of system (1). This was taken into account when generating the FXP solvers and compensated for by organizing the correct order of calculation. For example, the algorithm suitable for the integer implementation of the ODE solver of system (1), constructed on the basis of the Euler method, looks as follows: where h is the constant integration step size, and i is the solution time.
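A minimal sketch of such an ordered integer Euler step (an illustration under our own naming, not the paper's exact listing): the increment is multiplied by the small step h as soon as it is formed, which keeps intermediate values inside the Q7.24 grid.

```python
FRAC = 24                # Q7.24 fractional bits

def mul(a, b):
    """Fixed-point product of two Q7.24 integers."""
    return (a * b) >> FRAC

def euler_step_fxp(v, r, i_in, h, fv, fr):
    """One integer Euler step for a two-variable system. v, r, i_in, and h
    are Q7.24 integers; fv and fr return the Q7.24 right-hand sides.
    Multiplying each right-hand side by the small step h immediately shrinks
    the increment before it is added, reducing the risk of grid overflow."""
    v_next = v + mul(h, fv(v, r, i_in))
    r_next = r + mul(h, fr(v, r))
    return v_next, r_next
```

The same ordering principle applies to the higher-order solvers: each intermediate slope is scaled by h (or h/2) before any large sums are formed.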
2.2. Accuracy of the Fixed-Point Simulation
Let us estimate the accuracy of the various implementations of system (1) from the difference between the values of the state variables of the 32-bit and 64-bit FXP models and those of the floating-point model (double, DBL) in the time domain. Figure 2 provides charts of the absolute error for the EMP-based solvers.
The simulation was performed with the parameters given in equation (1). The initial conditions for all models are the same: V(0) = −0.65, R(0) = 0.097. The integration step size is h = 0.001. The input current was set according to the following law: I = 0.075 + 0.007·sin[2π·0.2646·(i + h)]. The models based on the EE, SEE, and MEMP methods demonstrate similar behavior and errors for the same simulation parameters. With increasing simulation time, the accumulation of differences between the FXP and DBL models can be observed (Figure 3).
Figure 3 shows that error accumulation leads to a complete divergence between the trajectories of the two models. For the 32-bit EMP-based solver, it occurs after 30 ms of simulation, while the 64-bit solver switches to a different operational mode after 400 ms. It should be noted that the drift of the integer solution's trajectory away from the trajectory of the original algorithm is inevitable, not only because of the limitations of the bit grid but also because of the different order of arithmetic operations in the solvers. However, the stability of the solution is retained, as was confirmed in a series of experiments.
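The accumulation effect can be reproduced on any system integrated twice, once in double precision and once on the fixed-point grid. The sketch below uses a simple harmonic oscillator rather than system (1), purely to illustrate how the worst-case divergence grows with simulation time:

```python
FRAC = 24
SCALE = 1 << FRAC

def mul(a, b):
    """Fixed-point product of two Q7.24 integers."""
    return (a * b) >> FRAC

def max_abs_error(steps=5000, h=0.001):
    """Integrate dv/dt = r, dr/dt = -v with explicit Euler twice, in double
    precision and on the Q7.24 grid, and return the largest divergence in v."""
    v_d, r_d = 1.0, 0.0          # double-precision state
    v_q, r_q = SCALE, 0          # fixed-point state (v = 1.0, r = 0.0)
    h_q = int(round(h * SCALE))  # the step itself is already quantized
    worst = 0.0
    for _ in range(steps):
        v_d, r_d = v_d + h * r_d, r_d - h * v_d
        v_q, r_q = v_q + mul(h_q, r_q), r_q - mul(h_q, v_q)
        worst = max(worst, abs(v_d - v_q / SCALE))
    return worst
```

Because each fixed-point multiplication rounds, the divergence is monotone non-decreasing with the number of steps, mirroring the accumulation observed in Figure 3.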
4. Discussion and Conclusions
Software models may be inapplicable in many practical and scientific tasks because of their low performance and the negative influence of operating systems in real-time applications. In the last decade, scholars have worked intensively on high-speed, realistic implementations of neuromorphic systems in hardware. Meanwhile, these studies have focused mainly on the interaction of neurons and the overall network architecture, while comparatively little attention has been paid to the numerical solvers needed to synthesize finite-difference models reproducing single-neuron dynamics. For example, recent work by Pani et al. [2] presented a 32-bit FPGA implementation of the Izhikevich neuron using the explicit Euler method. A good correspondence between the fixed-point model and the floating-point simulation was established; this may mislead followers, since for other neuron models, network architectures, and bit lengths, FPGA simulation results may not be that close to software simulation results. In our work, we focused on the dynamics of individual neurons and studied them with various numerical methods and bit lengths to help developers correctly choose a numerical implementation.
Our present study is devoted to the possibility of a realistic spiking neuron model implementation (the simplified Hodgkin–Huxley model) using fixed-point arithmetic with bit lengths of 32 (FXP32) and 64 (FXP64) bits, which is relevant when implementing neuromorphic systems on FPGAs. For numerical simulation of the neuron dynamics, four methods were chosen: the Explicit Euler method (EE), the Semi-Explicit Euler method (SEE), the Explicit Midpoint method (EMP), and the Modified Explicit Midpoint method with a smoothing step (MEMP). These methods were tested on various problems often found in the scientific literature on natural and artificial neuron studies. Therefore, the obtained results can be considered valid for a wide class of problems associated with neuron simulation.
To summarize the results of all the experiments, Table 6 is given. In the left column for each bit length, the method providing the smallest simulation error compared to the double (DBL) data type is placed; the methods listed in the right-hand columns are the runners-up.
The following conclusions can be drawn from the results of the study:
With a 32-bit word length, the EMP method, which has second-order algebraic accuracy, turned out to be the best solver. Among the first-order methods, the SEE method is preferable. The MEMP method appears worse for a low-bit implementation due to its higher number of arithmetic operations.
With a bit length of 64 bits, the MEMP method turned out to be the most accurate. The other three methods competed closely with each other for accuracy in the various tests. Taking its second-order algebraic accuracy into account, the EMP method is preferable.
Figure 12 shows that when using the EMP and MEMP methods with different integration steps, the dynamics of the neuron remain relatively unchanged, while the EE and SEE methods strongly affect the dynamics. The difference in the number of time intervals at various steps can reach a factor of two, so the use of the Euler method and its modifications in neural network emulators cannot be recommended.
Summarizing the last two points, in the general case one should choose the EMP method to reproduce the dynamics of neurons in fixed-point arithmetic.
In future studies, we will examine intermediate bit lengths (such as 40 bits) to establish a smoother dependence of the preferred method on the data type. The implementation of a neuron and a neuromorphic system model on an FPGA is also of interest.