Article

Training Neural Networks by Time-Fractional Gradient Descent

School of Mathematics and Statistics, Guizhou University, Guiyang 550025, China
* Author to whom correspondence should be addressed.
Axioms 2022, 11(10), 507; https://doi.org/10.3390/axioms11100507
Submission received: 1 September 2022 / Revised: 20 September 2022 / Accepted: 22 September 2022 / Published: 26 September 2022
(This article belongs to the Special Issue Impulsive, Delay and Fractional Order Systems)

Abstract

Motivated by the weighted averaging method for training neural networks, we study the time-fractional gradient descent (TFGD) method based on the time-fractional gradient flow and explore the influence of memory dependence on neural network training. The TFGD algorithm is studied via theoretical derivations and neural network training experiments. Compared with the common gradient descent (GD) algorithm, the optimization effect of the TFGD algorithm is significant when the fractional order α is close to 1, provided the learning rate η is chosen appropriately. The comparison is extended to experiments on the MNIST dataset with various learning rates. It is verified that TFGD has potential advantages when the fractional order α is near 0.95–0.99. This suggests that memory dependence can improve the training performance of neural networks.

1. Introduction

Many problems arising in machine learning and intelligent systems are usually reduced to optimization problems. Neural network training is essentially a process that minimizes model errors. The GD method, in particular the stochastic GD method, has been extensively applied to solve such problems from different perspectives [1].
To improve the optimization error and the generalization error in the training of neural networks, parameter-averaging techniques that combine iterative solutions have been employed. Most works adopt simple averaging over all iterative solutions and, despite its advantages, generally cannot obtain satisfactory performance. Bottou [2] and Hardt et al. [3] simply output the last solution, which leads to faster convergence but less stability in the optimization error. Some works [4,5] also analyze the stochastic GD method with uniform averaging. To improve the convergence rate of strongly convex function optimization, non-uniform averaging methods have been proposed in [6,7]. However, how best to combine all iterative solutions into a single solution remains open. Guo et al. [8] provide a beneficial attempt at this problem: for non-strongly convex objectives, they analyze the optimization error and generalization error of a polynomially increasing weighted averaging scheme. Based on a new primal averaging, Tao et al. [9] attain the optimal individual convergence using a simple modified gradient evaluation step. By simply averaging multiple points along the trajectory of stochastic GD, Izmailov et al. [10] also obtain better generalization than conventional training and show that the stochastic weight averaging procedure finds much flatter solutions than stochastic GD.
The core step in the GD algorithm is the calculation of the derivative of the loss function. Fractional calculus has been applied to the GD algorithm in the training of neural networks, giving the so-called fractional GD method, which offers better optimization performance and, in particular, better stability. Khan et al. [11] present a fractional gradient descent-based learning algorithm for radial-basis-function neural networks, based on the Riemann–Liouville derivative and a convex combination of an improved fractional GD. Bao et al. [12] provide a Caputo fractional-order deep back-propagation neural network model equipped with $L_2$ regularization. A fractional gradient descent method for BP neural networks is proposed in [13]; in particular, the Caputo derivative is used to calculate the fractional-order gradient of the error expressed by the traditional quadratic energy function. An adaptive fractional-order BP neural network for handwritten digit recognition, combined with a competitive evolutionary algorithm, is presented in [14]. The general convergence problem of the fractional-order gradient method has been tentatively investigated in [15].
There is a great deal of further work on the fractional GD method, which we will not list here.
We now turn to the time-fractional gradient flow. Two classical examples are the Allen–Cahn equation and the Cahn–Hilliard equation. The time-fractional Allen–Cahn equation has been investigated both theoretically and numerically [16,17,18,19,20]. Tang et al. [20] establish the energy dissipation law of time-fractional phase-field equations. Specifically, they prove that the time-fractional phase-field model admits an integral-type energy dissipation law at the continuous level; moreover, the discrete version of the energy dissipation law is also inherited. Liao et al. [17] design a variable-step Crank–Nicolson-type scheme for the time-fractional Allen–Cahn equation, which is unconditionally stable and preserves both the energy stability and the maximum principle. Liu et al. [18] investigate time-fractional Allen–Cahn and Cahn–Hilliard phase-field models and give effective finite difference and Fourier spectral schemes. On the other hand, for the complex time-fractional Schrödinger equation and the space–time fractional differential equation, novel precise solitary wave solutions are obtained by the modified simple equation scheme [21]. A number of solitary envelope solutions of the quadratic nonlinear Klein–Gordon equation are also constructed by the generalized Kudryashov and extended Sinh–Gordon expansion schemes [22].
In the training of neural networks, the parameter averaging technique can be regarded as memory-dependent, which corresponds exactly to an important feature of the time-fractional derivative. Weighted averaging is not easy to analyze theoretically; the time-fractional derivative, however, is well suited to this purpose. The main objective of this paper is to develop a TFGD algorithm based on the time-fractional gradient flow and theoretical derivations, and then to explore the influence of memory dependence on training neural networks.

2. The Time-Fractional Allen–Cahn Equation

This section will be devoted to introducing the time-fractional Allen–Cahn equation and its basic properties. Consider the following time-fractional Allen–Cahn equation,
$$\partial_t^\alpha u = \varepsilon^2 \Delta u - f(u), \quad \text{for } x \in \Omega \text{ and } t > 0,$$ (1)
where the domain $\Omega = (0, L)^2 \subset \mathbb{R}^2$, $\varepsilon$ is an interface width parameter, and $u$ is a function of $(x, t)$. The notation $\partial_t^\alpha$ represents the Caputo derivative of order $\alpha$ defined as
$$\partial_t^\alpha v = {}_0^{C} D_t^\alpha v \overset{\mathrm{def}}{=} I_t^{1-\alpha} \partial_t v, \quad \alpha \in (0, 1),$$ (2)
where $I_t^\alpha$ is the Riemann–Liouville fractional integration operator of order $\alpha > 0$,
$$(I_t^\alpha v)(t) \overset{\mathrm{def}}{=} \int_0^t \omega_\alpha(t-s)\, v(s)\, ds, \qquad \omega_\alpha(t) \overset{\mathrm{def}}{=} \frac{t^{\alpha-1}}{\Gamma(\alpha)}.$$
The nonlinear function $f(u) = F'(u)$ is the derivative of the bulk energy density function $F(u)$, which is usually chosen as
$$F(u) = \frac{1}{4}\big(1 - u^2\big)^2.$$
The function F ( u ) is of bistable type and admits two local minima.
When the fractional order parameter α = 1 , Equation (1) immediately becomes the classical Allen–Cahn equation,
$$\partial_t u = \varepsilon^2 \Delta u - f(u).$$ (3)
The Allen–Cahn Equation (3) can be viewed as an $L^2$ gradient flow,
$$\partial_t u = -\frac{\delta E}{\delta u},$$
where $\frac{\delta E}{\delta u}$ represents the first-order variational derivative, and the corresponding free-energy functional $E[u]$ is defined as
$$E[u](t) = \int_\Omega \left( \frac{\varepsilon^2}{2} |\nabla u|^2 + F(u) \right) dx.$$
It is well-known that Equation (3) satisfies two important properties: one is the energy dissipation,
$$\frac{dE}{dt} = -\left\| \frac{\delta E}{\delta u} \right\|^2, \quad \text{or} \quad E[u](t) \le E[u](s), \quad \forall\, t > s,$$
and the other is the maximum principle,
$$|u(x, t)| \le 1 \quad \text{if} \quad |u(x, 0)| \le 1.$$
These two properties play an important role in constructing stable numerical schemes.
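For completeness, the energy dissipation identity for (3) follows from the chain rule and integration by parts (a short derivation, assuming periodic or homogeneous Neumann boundary conditions so that the boundary term vanishes):
$$\frac{d}{dt} E[u](t) = \int_\Omega \big( \varepsilon^2 \nabla u \cdot \nabla \partial_t u + F'(u)\, \partial_t u \big)\, dx = \int_\Omega \big( -\varepsilon^2 \Delta u + f(u) \big)\, \partial_t u\, dx = \Big\langle \frac{\delta E}{\delta u}, \partial_t u \Big\rangle = -\Big\| \frac{\delta E}{\delta u} \Big\|^2 \le 0.$$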
In a similar way, the time-fractional Allen–Cahn Equation (1) can also be regarded as a fractional gradient flow,
$$\partial_t^\alpha u = -\frac{\delta E}{\delta u}.$$ (5)
It can be found in [20] that Equation (1) admits the maximum principle but only satisfies the weaker energy inequality
$$E[u](t) \le E[u](0), \quad \forall\, t > 0.$$

TFGD Model and Numerical Schemes

Before giving the numerical algorithms, we write down the TFGD model for training neural networks.
In machine learning, the main task of learning a predictive model is typically formulated as the minimization problem of the empirical risk function,
$$\min_{w} J(w), \qquad J(w) = \mathbb{E}_{x, y}\, L\big(y, f(x; w)\big),$$ (6)
where $w$ is the set of parameters of the neural network and $L(\cdot, \cdot)$ is the loss function for each sample, which quantifies the deviation between the predicted value $f(x; w)$ and the corresponding true value $y$. When gradient descent is applied, the iteration of weights for (6) is given by
$$w_{k+1} = w_k - \xi\, \nabla J(w_k),$$ (7)
where ξ represents a positive stepsize. The continuous version is the gradient flow,
$$w_t = -\nabla J(w).$$
In order to solve (7) associated with (6), existing works mostly consider weighted averaging schemes over the training process rather than using only the newest weight, that is,
$$\tilde{w}_k = \sum_{i=1}^{k} \gamma_i\, w_i.$$ (8)
The simplest case of (8) is the arithmetic mean, where the $\gamma_i$ are constant. In this paper, we consider non-constant cases, such as $\gamma_i \propto \beta^i$ for some $\beta \ge 1$, and propose a new type of weighting that is connected to fractional gradients.
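As a concrete illustration of the averaging scheme (8), the following minimal Python sketch (our own toy example, not code from the cited works; the quadratic loss and all names are ours) forms uniform and increasing weighted averages of plain GD iterates:

    import numpy as np

    def gd_iterates(grad, w0, lr=0.1, n_iter=50):
        """Run plain gradient descent and return all iterates w_0, ..., w_n."""
        ws = [np.asarray(w0, dtype=float)]
        for _ in range(n_iter):
            ws.append(ws[-1] - lr * grad(ws[-1]))
        return np.array(ws)

    def weighted_average(ws, gammas):
        """Weighted average of iterates as in (8); the weights are normalized to sum to 1."""
        gammas = np.asarray(gammas, dtype=float)
        return (gammas[:, None] * ws).sum(axis=0) / gammas.sum()

    # Toy quadratic loss J(w) = 0.5 * ||w||^2, so grad J(w) = w.
    ws = gd_iterates(lambda w: w, w0=[1.0, -2.0])
    w_uniform = weighted_average(ws, np.ones(len(ws)))               # arithmetic mean
    w_increasing = weighted_average(ws, 1.05 ** np.arange(len(ws)))  # gamma_i growing geometrically
    print(w_uniform, w_increasing)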
Now, we consider the following model equation:
$$\partial_t^\alpha w(t) = -\nabla J(w), \quad 0 < \alpha < 1,$$ (9)
$$w(0) = w_0,$$ (10)
where $\partial_t^\alpha$ denotes the Caputo fractional derivative operator defined by
$$\partial_t^\alpha w(t) = \frac{1}{\Gamma(1-\alpha)} \int_0^t (t-s)^{-\alpha}\, \frac{d}{ds} w(s)\, ds = \frac{d}{dt} \int_0^t \frac{(t-s)^{-\alpha}}{\Gamma(1-\alpha)} \big(w(s) - w(0)\big)\, ds = \mathrm{P.V.} \int_0^t \frac{(t-s)^{-\alpha-1}}{\Gamma(-\alpha)} \big(w(s) - w(0)\big)\, ds,$$ (11)
with P . V . being the principal value.
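To make the definition (11) concrete, the following short Python check (our own illustration, not part of the paper's method; it uses a standard L1-type quadrature rather than the Grünwald–Letnikov formula introduced later) approximates the Caputo derivative of $w(t) = t$ and compares it with the exact value $t^{1-\alpha}/\Gamma(2-\alpha)$:

    import numpy as np
    from math import gamma

    def caputo_l1(w_vals, t, alpha):
        """L1-type approximation of the Caputo derivative of order alpha at t[-1],
        given samples w_vals of w on the uniform grid t."""
        h = t[1] - t[0]
        tn = t[-1]
        acc = 0.0
        for j in range(len(t) - 1):
            dw = (w_vals[j + 1] - w_vals[j]) / h                       # piecewise-constant w'(s)
            kernel = ((tn - t[j]) ** (1 - alpha)
                      - (tn - t[j + 1]) ** (1 - alpha)) / (1 - alpha)  # integral of (tn - s)^(-alpha)
            acc += dw * kernel
        return acc / gamma(1 - alpha)

    alpha = 0.7
    t = np.linspace(0.0, 1.0, 201)
    print(caputo_l1(t, t, alpha), 1.0 ** (1 - alpha) / gamma(2 - alpha))  # approximation vs. exact at t = 1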
Remark 1.
For the gradient flow $w_t = -\nabla J(w)$, the loss is decreasing since $\frac{d}{dt} J(w) = \langle \nabla J(w), w_t \rangle = -\|\nabla J(w)\|^2 \le 0$. For the fractional flow, however, it can be seen from [20] that there is a regularized loss that is decreasing. In addition, when $\alpha \to 1$, Equation (9) recovers the classical gradient flow.
We know that Equation (9) corresponds to a fractional gradient flow (5). Similar to the work [20], to maintain the energy dissipative law, we define the modified variational energy as
$$E_\alpha(w) = J(w) + \frac{1}{2} I_t^\alpha \|\nabla J(w)\|^2 \overset{\mathrm{def}}{=} J(w) + R(w),$$ (12)
where the non-negative term R ( w ) is a regularization term, which is given by
$$R(w) = \frac{1}{2} \int_0^t \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)}\, \|\nabla J(w(s))\|^2\, ds.$$ (13)
Then the modified fractional gradient flow of (12) is given by
$$\partial_t^\alpha w = -\frac{\delta E_\alpha}{\delta w}.$$ (14)
Following the work [20], we have the following dissipation properties.
Theorem 1.
The modified variational energy E α is dissipative along the time-fractional gradient flow (14).
Proof. 
To begin with, the time-fractional Equation (9) can be equivalently reformulated as
$$\partial_t w(t) = -{}^{R}\partial_t^{1-\alpha}\, \nabla J(w),$$ (15)
where ${}^{R}\partial_t^{\alpha} = {}_0^{RL} D_t^\alpha$ is the Riemann–Liouville derivative defined by
$${}^{R}\partial_t^{\alpha} v \overset{\mathrm{def}}{=} \partial_t I_t^{1-\alpha} v, \quad \alpha \in (0, 1).$$ (16)
Along the time-fractional gradient flow $\partial_t^\alpha w(t) = -\nabla J(w)$, we have
$$\frac{d J(w)}{dt} = \big\langle \nabla J(w), \partial_t w \big\rangle = -\big\langle \nabla J(w), {}^{R}\partial_t^{1-\alpha}\, \nabla J(w) \big\rangle.$$
Using the inequality from [23],
$$v(t)\, {}^{R}\partial_t^{1-\alpha} v(t) \ge \frac{1}{2}\, {}^{R}\partial_t^{1-\alpha} v^2(t) + \frac{1}{2}\, \omega_\alpha(t)\, v^2(t), \qquad \forall\, v \in C[0, T],$$ (17)
and applying it componentwise with $v(t) = \nabla J(w(t))$, we derive from (15) and (17) that
$$\frac{d J(w)}{dt} \le -\frac{1}{2}\, {}^{R}\partial_t^{1-\alpha} \|\nabla J(w)\|^2 - \frac{1}{2}\, \omega_\alpha(t)\, \|\nabla J(w)\|^2 = -\frac{1}{2}\, \partial_t I_t^{\alpha} \|\nabla J(w)\|^2 - \frac{1}{2}\, \omega_\alpha(t)\, \|\nabla J(w)\|^2.$$
Therefore, we obtain
$$\frac{d E_\alpha}{dt} = \frac{d J(w)}{dt} + \frac{1}{2}\, \partial_t I_t^{\alpha} \|\nabla J(w)\|^2 \le -\frac{1}{2}\, \omega_\alpha(t)\, \|\nabla J(w)\|^2 \le 0.$$
   □
Remark 2.
Armed with (2) and (16), we see that the Caputo derivative ${}_0^{C} D_t^\alpha$ and the Riemann–Liouville derivative ${}_0^{RL} D_t^\alpha$ can both be expressed through the Riemann–Liouville fractional integration operator,
$${}_0^{RL} D_t^\alpha v(t) = \partial_t I_t^{1-\alpha} v = \frac{1}{\Gamma(1-\alpha)} \frac{d}{dt} \int_0^t (t-s)^{-\alpha}\, v(s)\, ds, \qquad {}_0^{C} D_t^\alpha v(t) = I_t^{1-\alpha} v' = \frac{1}{\Gamma(1-\alpha)} \int_0^t (t-s)^{-\alpha}\, v'(s)\, ds.$$
For an absolutely continuous (differentiable) function $v: \mathbb{R}^+ \to \mathbb{R}$, the Caputo fractional derivative and the Riemann–Liouville fractional derivative are connected by the following relations:
$${}_0^{C} D_t^\alpha v(t) = {}_0^{RL} D_t^\alpha \big(v(t) - v(0)\big), \quad \text{or} \quad {}_0^{C} D_t^\alpha v(t) = {}_0^{RL} D_t^\alpha v(t) - \frac{v(0)}{\Gamma(1-\alpha)\, t^{\alpha}},$$
where 0 < α < 1 . Compared with the Riemann–Liouville derivative, the Caputo derivative is more flexible for handling initial and boundary value problems.
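The second relation follows from the first, since for a constant $c$ the Riemann–Liouville derivative gives
$${}_0^{RL} D_t^\alpha\, c = \frac{c}{\Gamma(1-\alpha)} \frac{d}{dt} \int_0^t (t-s)^{-\alpha}\, ds = \frac{c}{\Gamma(1-\alpha)} \frac{d}{dt} \frac{t^{1-\alpha}}{1-\alpha} = \frac{c\, t^{-\alpha}}{\Gamma(1-\alpha)},$$
so applying ${}_0^{RL} D_t^\alpha$ to $v(t) - v(0)$ and taking $c = v(0)$ yields exactly the correction term $v(0)/\big(\Gamma(1-\alpha)\, t^\alpha\big)$.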
The remainder of this section constructs the numerical scheme for the TFGD model by weighted averaging with the newest weight. The first-order accurate Grünwald–Letnikov formula for
$$\mathrm{P.V.} \int_0^t \frac{(t-s)^{-\alpha-1}}{\Gamma(-\alpha)}\, w(s)\, ds$$
is given by
$$\mathrm{P.V.} \int_0^{t_n} \frac{(t_n - s)^{-\alpha-1}}{\Gamma(-\alpha)}\, w(s)\, ds \approx \frac{1}{\eta^\alpha} \sum_{k=0}^{n} \phi_{n-k}^{(\alpha)}\, w(t_k),$$ (18)
where $t_k = k\eta$, $\eta$ is the time step size, and $\phi_n^{(\alpha)} = (-1)^n \binom{\alpha}{n} = \frac{\Gamma(n-\alpha)}{\Gamma(-\alpha)\, \Gamma(n+1)}$ satisfies
$$\phi^{(\alpha)}(z) = (1-z)^\alpha = \sum_{n=0}^{\infty} \phi_n^{(\alpha)}\, z^n, \qquad |z| \le 1,$$
which can be calculated by the recurrence formula
$$\phi_n^{(\alpha)} = \frac{n - 1 - \alpha}{n}\, \phi_{n-1}^{(\alpha)}, \qquad \phi_0^{(\alpha)} = 1.$$ (19)
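The coefficients $\phi_n^{(\alpha)}$ are cheap to generate from (19); the short Python sketch below (our own illustration) computes them and cross-checks a few values against the Gamma-function formula:

    import numpy as np
    from math import gamma

    def gl_coefficients(alpha, n_max):
        """Gruenwald-Letnikov coefficients phi_0, ..., phi_{n_max} via the recurrence (19)."""
        phi = np.empty(n_max + 1)
        phi[0] = 1.0
        for n in range(1, n_max + 1):
            phi[n] = (n - 1 - alpha) / n * phi[n - 1]
        return phi

    alpha = 0.9
    phi = gl_coefficients(alpha, 200)
    for n in (1, 2, 5):  # compare with phi_n = Gamma(n - alpha) / (Gamma(-alpha) * Gamma(n + 1))
        assert abs(phi[n] - gamma(n - alpha) / (gamma(-alpha) * gamma(n + 1))) < 1e-12
    print(phi[:5])    # 1, -alpha, -alpha*(1-alpha)/2, ...
    print(phi.sum())  # partial sums tend to (1 - 1)^alpha = 0 as n_max grows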
Then, from (11) and (18), we obtain
$$\partial_t^\alpha w(t_n) \approx \frac{1}{\eta^\alpha} \sum_{k=0}^{n} \phi_{n-k}^{(\alpha)} \big( w(t_k) - w(0) \big).$$
This provides a training scheme
$$\frac{1}{\eta^\alpha} \sum_{i=0}^{k+1} \phi_{k+1-i}^{(\alpha)} \big( w_i - w_0 \big) = -\nabla J(w_k),$$
which is reformulated as
$$w_{k+1} = w_0 - \Big( \eta^\alpha\, \nabla J(w_k) + \sum_{i=0}^{k} \phi_{k+1-i}^{(\alpha)} \big( w_i - w_0 \big) \Big).$$ (21)
It should be emphasized that scheme (21) is new and is a key step in the time-fractional gradient descent algorithm, while other steps are similar to those in usual gradient descent.
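To illustrate scheme (21) in isolation, the following self-contained Python sketch (a toy quadratic loss of our own choosing, not the paper's experiment) runs the TFGD update alongside plain GD:

    import numpy as np

    def grad_J(w):
        # Toy quadratic loss J(w) = 0.5 * ||w||^2, so that grad J(w) = w.
        return w

    def tfgd(w0, alpha=0.95, eta=0.1, n_iter=100):
        """Time-fractional gradient descent following scheme (21)."""
        w0 = np.asarray(w0, dtype=float)
        phi = np.empty(n_iter + 1)  # Gruenwald-Letnikov coefficients via the recurrence (19)
        phi[0] = 1.0
        for n in range(1, n_iter + 1):
            phi[n] = (n - 1 - alpha) / n * phi[n - 1]
        ws = [w0]
        for k in range(n_iter):
            memory = sum(phi[k + 1 - i] * (ws[i] - w0) for i in range(k + 1))
            ws.append(w0 - (eta**alpha * grad_J(ws[k]) + memory))
        return np.array(ws)

    def gd(w0, eta=0.1, n_iter=100):
        ws = [np.asarray(w0, dtype=float)]
        for _ in range(n_iter):
            ws.append(ws[-1] - eta * grad_J(ws[-1]))
        return np.array(ws)

    w0 = [1.0, -2.0]
    print(0.5 * np.sum(tfgd(w0)[-1] ** 2), 0.5 * np.sum(gd(w0)[-1] ** 2))  # final losses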

3. Numerical Simulation and Empirical Analysis

In this section, we present numerical experiments with neural networks to verify our theoretical findings, using both randomly generated data and the classical MNIST data set. The experiments involve two important parameters: the learning rate (lr), denoted by η, and the fractional-order parameter α. We explore the influence of these two parameters on the TFGD algorithm and compare it with the general GD algorithm.
We now describe the TFGD algorithm, which is a simple but effective modification of the general GD algorithm for training neural networks. The TFGD procedure is summarized in Algorithm 1.
Algorithm 1: Time-fractional gradient descent (TFGD)
   Input: fractional-order parameter α ∈ (0, 1), initial weight w_0, LR bounds η_1, η_2, cycle length c (for a constant learning rate, c = 1), number of iterations n
   Ensure: weight w
      w ← w_0   {Initialize weights with w_0}
      for k = 1, 2, …, n do
         η ← η(k)   {Calculate the LR for this iteration}
         Calculate the gradient g_k ← ∇J_k(w)
         w ← w − η g_k   {for GD update}                                                            (∗)
         or execute
         w ← w_0 − ( η^α g_k + Σ_{i=0}^{k} ϕ_{k+1−i}^{(α)} (w_i − w_0) )   {for TFGD update}       (∗∗)
         Store w as w_k   {for TFGD}
      end for
Some explanations of the TFGD algorithm are in order. We linearly decrease the learning rate from η_1 to η_2 within each cycle. The formula for the learning rate at iteration k is
$$\eta(k) = \big(1 - t(k)\big)\, \eta_1 + t(k)\, \eta_2, \qquad t(k) = \frac{1}{c} \big( \operatorname{mod}(k-1, c) + 1 \big),$$
where the base learning rates η_1 ≥ η_2 and the cycle length c are hyper-parameters of the method. When (∗) is chosen to update w, Algorithm 1 corresponds to the general GD algorithm; when (∗∗) is chosen, Algorithm 1 becomes the TFGD algorithm. When executing step (∗∗), the coefficients ϕ^{(α)} are computed by the recurrence formula (19).
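The learning-rate schedule above is easy to implement directly; the small Python helper below (our illustration; the function and argument names are ours) reproduces it:

    def cyclic_lr(k, eta1, eta2, c):
        """Linearly decay the learning rate from eta1 toward eta2 within each cycle of length c;
        k is the 1-based iteration counter used in the formula for eta(k)."""
        t = (((k - 1) % c) + 1) / c
        return (1 - t) * eta1 + t * eta2

    # With c = 1 (constant learning rate), every iteration uses eta2:
    print([cyclic_lr(k, 0.1, 0.01, c=1) for k in range(1, 4)])
    # With c = 5, the rate decreases linearly and restarts at the beginning of each cycle:
    print([round(cyclic_lr(k, 0.1, 0.01, c=5), 3) for k in range(1, 11)])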

3.1. Numerical Simulation

The experimental data are randomly generated. The data $x$ are taken as 20,000 random points in the interval $(-1, 1)$, and the values $y$ are obtained from the test function
$$f(x) = \sin(x) + \sin(\pi x) + \sin(e^x).$$
A neural network model with two hidden layers is applied in this experiment. The input and the output are both one-dimensional, and the two hidden layers contain 25 neurons.
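A minimal PyTorch sketch of this setup is given below; it is our own reconstruction, in which the tanh activation and the mean-squared-error loss are assumptions not stated in the text, and "25 neurons" is read as 25 per hidden layer:

    import math
    import torch
    from torch import nn

    torch.manual_seed(0)

    # 20,000 random points x in (-1, 1) and targets y from the test function above.
    x = torch.empty(20000, 1).uniform_(-1.0, 1.0)
    y = torch.sin(x) + torch.sin(math.pi * x) + torch.sin(torch.exp(x))

    # A 1-25-25-1 fully connected network (the activation choice is an assumption).
    model = nn.Sequential(
        nn.Linear(1, 25), nn.Tanh(),
        nn.Linear(25, 25), nn.Tanh(),
        nn.Linear(25, 1),
    )
    loss_fn = nn.MSELoss()
    print(loss_fn(model(x), y).item())  # initial full-batch loss before training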
We next choose the two parameters. The learning rate η is taken as two different values, 0.1 and 0.01, and the fractional order parameter α is chosen from {0.7, 0.8, 0.9, 0.95, 0.99}, all belonging to (0.5, 1).
In the neural network training experiments based on the TFGD algorithm, each parameter setting is trained for 500 iterations, and each run is repeated 20 times. We then average, over the 20 repetitions, the lowest loss attained during training. This yields clearly distinguishable loss behavior of the neural network for different α when the learning rate η is taken as 0.01 or 0.1. The experimental results are shown in Figure 1.
Next, to make the effect of the neural network training experiments clearer and more intuitive, the learning rate is fixed at 0.1 or 0.01, and we compare the loss of the full-batch gradient descent algorithm with that of the TFGD algorithm. The results are shown in Figure 2.
From Figure 2, we can verify the following two facts:
  • For the different learning rates (η = 0.01, 0.1) and the fractional order parameter α = 0.99, the neural network optimization based on the TFGD algorithm is significantly better than that of the general GD algorithm. Moreover, the larger the fractional order parameter α, the better the optimization effect of the neural network.
  • When η = 0.1, α = 0.99 or η = 0.01, α = 0.9, 0.99, the loss of the TFGD algorithm is better than that of the general GD algorithm. However, the loss of TFGD is the same as that of the general GD when η = 0.01, α = 0.95.
In short, with an appropriately chosen learning rate, a larger value of α gives a better loss for the TFGD algorithm. This verifies that memory dependence has an impact on the training of neural networks.

3.2. Empirical Analysis

To further assess the optimization effect of the TFGD algorithm on neural networks and to confirm the accuracy of the previous numerical experiments, we apply the TFGD algorithm to neural network optimization on real data. In the empirical analysis, we mainly use the classical handwritten digit data set, i.e., the MNIST data set. To improve the efficiency of the training, the parameter α is taken as 0.95 and 0.99, and the different learning rates η = 0.01, 0.1 are also selected.
The input data of this experimental model are 784-dimensional (28 × 28), the output data are one-dimensional, and the cross entropy is used as the loss function. To improve the efficiency of neural network training, the first 1000 samples of the handwritten digit data set are selected for the experiment. We run the TFGD algorithm for 100 iterations. Under the different learning rates (η = 0.01 or η = 0.1), the optimization effect of the TFGD algorithm is better than that of the general GD algorithm. Moreover, for the TFGD algorithm, the loss with α = 0.99 is better than that with α = 0.95. The corresponding results are shown in Figure 3.
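For reference, a minimal PyTorch sketch of the data preparation is given below (our own reconstruction; the hidden width of the classifier is an assumption, since the architecture used for MNIST is not described in detail). The GD update (∗) or the TFGD update (∗∗) from Algorithm 1 would then be applied to the parameters of this model:

    import torch
    from torch import nn
    from torchvision import datasets, transforms

    # First 1000 MNIST training samples, flattened to 784-dimensional inputs.
    mnist = datasets.MNIST(root="./data", train=True, download=True,
                           transform=transforms.ToTensor())
    x = torch.stack([mnist[i][0].view(-1) for i in range(1000)])  # shape (1000, 784)
    labels = torch.tensor([mnist[i][1] for i in range(1000)])     # shape (1000,)

    # A simple fully connected classifier (hidden width 128 is our assumption).
    model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
    loss_fn = nn.CrossEntropyLoss()  # cross entropy, as stated in the text
    print(loss_fn(model(x), labels).item())  # initial full-batch loss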
In summary, using the classical handwritten digit data set, neural network training experiments based on the TFGD algorithm and on the general GD algorithm were carried out. At different learning rates, we verify that memory dependence affects neural network training.

4. Conclusions

This paper studies the TFGD algorithm, based on a weighted average, to optimize neural network training, which leads to better generalization performance and lower loss. Specifically, the parameter averaging method in neural network training can be regarded as memory-dependent, which corresponds exactly to an important feature of the time-fractional derivative. The parameter averaging method is not convenient for theoretical analysis; the time-fractional derivative is more suitable. Moreover, the energy dissipation law and the numerical stability of the time-fractional derivative are inherited. Based on these advantages, the new TFGD algorithm is proposed. We verify that the TFGD algorithm has potential advantages when the fractional order parameter α is near 0.95–0.99. This implies that memory dependence could improve the training performance of neural networks.
There are many exciting directions for future work. The TFGD algorithm has so far only been applied to one-dimensional function fitting and to image classification. To verify the applicability and effectiveness of the algorithm, we plan to consider more general numerical examples, such as fitting multi-dimensional functions and image classification on CIFAR10, CIFAR100, and other data sets. The convergence analysis of the algorithm is also left for future research. The TFGD algorithm is a new attempt to combine the averaging method in neural networks with time-fractional gradient descent. We hope that it will inspire further work in this area.

Author Contributions

The contributions of J.X. and S.L. are equal. All authors have read and agreed to the published version of the manuscript.

Funding

Sirui Li is supported by the Growth Foundation for Youth Science and Technology Talent of Educational Commission of Guizhou Province of China under grant No. [2021]087.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We would like to thank the editors and the referees for their valuable comments and suggestions to improve our paper. We also would like to thank Yongqiang Cai from Beijing Normal University and Fanhai Zeng from Shandong University for their helpful discussions and recommendations regarding this problem.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bottou, L.; Curtis, F.; Nocedal, J. Optimization methods for large-scale machine learning. SIAM Rev. 2018, 60, 223–311. [Google Scholar] [CrossRef]
  2. Bottou, L. Large-scale machine learning with stochastic gradient descent. In Proceedings of the International Conference on Computational Statistics, Paris, France, 22–27 August 2010; pp. 177–186. [Google Scholar]
  3. Hardt, M.; Recht, B.; Singer, Y. Train faster, generalize better: Stability of stochastic gradient descent. In Proceedings of the International Conference on Machine Learning, New York, NY, USA, 19–24 June 2016. [Google Scholar]
  4. Polyak, B.T.; Juditsky, A.B. Acceleration of stochastic approximation by averaging. SIAM J. Control. Optim. 1992, 30, 838–855. [Google Scholar] [CrossRef]
  5. Zinkevich, M. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), Washington, DC, USA, 21–24 August 2003; pp. 928–936. [Google Scholar]
  6. Rakhlin, A.; Shamir, O.; Sridharan, K. Making gradient descent optimal for strongly convex stochastic optimization. arXiv 2011, arXiv:1109.5647. [Google Scholar]
  7. Shamir, O.; Zhang, T. Stochastic gradient descent for non-smooth optimization: Convergence results and optimal averaging schemes. In Proceedings of the International Conference on Machine Learning, Atlanta, GA, USA, 16–21 June 2013; pp. 71–79. [Google Scholar]
  8. Guo, Z.; Yan, Y.; Yang, T. Revisiting SGD with increasingly weighted averaging: Optimization and generalization perspectives. arXiv 2020, arXiv:2003.04339v3. [Google Scholar]
  9. Tao, W.; Pan, Z.; Wu, G.; Tao, Q. Primal averaging: A new gradient evaluation step to attain the optimal individual convergence. IEEE Trans. Cybern. 2020, 50, 835–845. [Google Scholar] [CrossRef] [PubMed]
  10. Izmailov, P.; Podoprikhin, D.; Garipov, T.; Vetrov, D.; Wilson, A.G. Averaging weights leads to wider optima and better generalization. In Proceedings of the 34th Conference on Uncertainty in Artificial Intelligence (UAI-2018), Monterey, CA, USA, 6–10 August 2018; pp. 876–885. [Google Scholar]
  11. Khan, S.; Malik, M.A.; Togneri, R.; Bennamoun, M. A fractional gradient descent-based RBF neural network. Circuits Syst. Signal Process. 2018, 37, 5311–5332. [Google Scholar] [CrossRef]
  12. Bao, C.; Pu, Y.; Zhang, Y. Fractional-order deep back propagation neural Network. Comput. Intell. Neurosci. 2018, 2018, 7361628. [Google Scholar] [CrossRef] [PubMed]
  13. Wang, J.; Wen, Y.; Gou, Y.; Ye, Z.; Chen, H. Fractional-order gradient descent learning of BP neural networks with Caputo derivative. Neural Netw. 2017, 89, 19–30. [Google Scholar] [CrossRef] [PubMed]
  14. Chen, M.; Chen, B.; Zeng, G.; Lu, K.; Chu, P. An adaptive fractional-order BP neural network based on extremal optimization for handwritten digits recognition. Neurocomputing 2020, 391, 260–272. [Google Scholar] [CrossRef]
  15. Wei, Y.; Kang, Y.; Yin, W.; Wang, Y. Generalization of the gradient method with fractional order gradient direction. J. Frankl. Inst. 2020, 357, 2514–2532. [Google Scholar] [CrossRef]
  16. Du, Q.; Yang, J.; Zhou, Z. Time-fractional Allen-Cahn equations: Analysis and numerical methods. J. Sci. Comput. 2020, 42, 85. [Google Scholar] [CrossRef]
  17. Liao, H.L.; Tang, T.; Zhou, T. An energy stable and maximum bound preserving scheme with variable time steps for time fractional Allen-Cahn equation. SIAM J. Sci. Comput. 2021, 43, A3503–A3526. [Google Scholar] [CrossRef]
  18. Liu, H.; Cheng, A.; Wang, H.; Zhao, J. Time-fractional Allen-Cahn and Cahn-Hilliard phase-field models and their numerical investigation. Comput. Math. Appl. 2018, 76, 1876–1892. [Google Scholar] [CrossRef]
  19. Quan, C.; Tang, T.; Yang, J. How to define dissipation-preserving energy for time-fractional phase-field equations. CSIAM Trans. Appl. Math. 2020, 1, 478–490. [Google Scholar] [CrossRef]
  20. Tang, T.; Yu, H.; Zhou, T. On energy dissipation theory and numerical stability for time-fractional phase-field equations. SIAM J. Sci. Comput. 2019, 41, A3757–A3778. [Google Scholar] [CrossRef]
  21. Rahman, Z.; Abdeljabbar, A.; Roshid, H.; Ali, M.Z. Novel precise solitary wave solutions of two time fractional nonlinear evolution models via the MSE scheme. Fractal Fract. 2022, 6, 444. [Google Scholar] [CrossRef]
  22. Abdeljabbar, A.; Roshid, H.; Aldurayhim, A. Bright, dark, and rogue wave soliton solutions of the quadratic nonlinear Klein-Gordon equation. Symmetry 2022, 14, 1223. [Google Scholar] [CrossRef]
  23. Alsaedi, A.; Ahmad, B.; Kirane, M. Maximum principle for certain generalized time and space-fractional diffusion equations. Quart. App. Math. 2015, 73, 163–175. [Google Scholar] [CrossRef]
Figure 1. Effects of the fractional order parameter α, where lr = η denotes the learning rate.
Figure 2. Comparison of loss effects for the learning rates η = 0.1, 0.01 and different fractional order parameters α, where QGD and TFGD stand for the general gradient descent and the time-fractional gradient descent, respectively.
Figure 3. Comparison of loss effects for the learning rates η = 0.1 and η = 0.01 and the fractional order parameters α = 0.95, 0.99, where QGD and TFGD stand for the general gradient descent and the time-fractional gradient descent, respectively.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
