Article

Polynomial-Computable Representation of Neural Networks in Semantic Programming

Sobolev Institute of Mathematics, Academician Koptyug Ave., 4, 630090 Novosibirsk, Russia
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
J 2023, 6(1), 48-57; https://doi.org/10.3390/j6010004
Submission received: 17 November 2022 / Revised: 27 December 2022 / Accepted: 4 January 2023 / Published: 6 January 2023
(This article belongs to the Special Issue Feature Paper of J in 2022)

Abstract

Many libraries for neural networks are written for Turing-complete programming languages such as Python, C++, PHP, and Java. However, at the moment, there are no suitable libraries implemented for the p-complete logical programming language L. This paper investigates the polynomial-computable representation of neural networks in this language, whose basic elements are hereditarily finite lists and whose programs are defined using special terms and formulas of mathematical logic. Such a representation is shown to exist for multilayer feedforward fully connected neural networks with sigmoidal activation functions. To prove this fact, special p-iterative terms are constructed that simulate the operation of a neural network. This result plays an important role in the application of the p-complete logical programming language L to artificial intelligence algorithms.

1. Introduction

The concept of semantic programming [1] was developed in the 1970s and 1980s. The main objective of this direction was to create a logical programming language in which programs are given by the basic constructions of mathematical logic, such as formulas and terms. A hereditarily finite list superstructure $HW(M)$ of the signature $\sigma$ was chosen as the virtual execution device. At first, the programs were $\Sigma$- and $\Delta_0$-formulas [2], but this language was Turing-complete [3].
In 2017, Goncharov proposed considering programs as terms and presented the concept of conditional terms [4]. In the new language $L_1$, $L_1$-programs and $L_1$-formulas were inductively specified through the standard terms and $\Delta_0$-formulas of the signature $\sigma$. The polynomial-computable (p-computable) model $HW(M)$ of the signature $\sigma$ was chosen as the virtual execution device. The authors of [5] showed that any $L_1$-program and $L_1$-formula has polynomial computational complexity. The question arises as to whether all algorithms of polynomial computational complexity can be represented in this language or in its polynomial-computable extensions. This remained an open problem for several years.
Only in 2021 did Goncharov and Nechesov propose the concept of p-iterative terms [6]. Extending the language $L_1$ with p-iterative terms leads to the language L. In [6], it is shown that the class of all L-programs coincides with the class P of all algorithms of polynomial computational complexity.
The main area of application of semantic programming is artificial intelligence. In artificial intelligence, the black box problem [7] has been around for a long time: most often, AI gives a result without explaining it and does not explicitly share how and why it reaches its conclusions. One of the main directions in AI is machine learning [8]. Most machine learning algorithms implemented using neural networks give a result but do not explain it. Our work is a first step toward separating artificial intelligence algorithms into a stage where the result can be logically explained and a stage where the result is given without explanation.
The main objective of this paper is to build a polynomial-computable representation [9] of multilayer feedforward fully connected neural networks [10,11] using the basic elements of the model $HW(M)$. Neural networks of this type play an important role in many artificial intelligence algorithms [12], so a library for them in the p-complete logical language L is necessary.

2. Preliminaries

The paper uses results from the theory of semantic programming [13], which is based on a polynomial-computable hereditarily finite list superstructure $HW(M)$ of a finite signature $\sigma$. The main set of $HW(M)$ consists of the hereditarily finite list elements. The signature $\sigma$ consists of the following constant, operations and predicates:
(1) $nil$: a constant that selects the empty list;
(2) $head^{(1)}$: returns the last element of the list, or $nil$ otherwise;
(3) $tail^{(1)}$: returns the list without its last element, or $nil$ otherwise;
(4) $getElement^{(2)}$: returns the $i$th element of the list, or $nil$ otherwise;
(5) $NumElements^{(1)}$: returns the number of elements in the list;
(6) $first^{(1)}$: returns the first element of the list, or $nil$ otherwise;
(7) $second^{(1)}$: returns the second element of the list, or $nil$ otherwise;
(8) $\in^{(2)}$: the predicate "to be an element of a list";
(9) a binary predicate "to be an initial segment of a list".
Let us define the concepts of $L_0$-formulas and $L_0$-programs in the basic language $L_0$ as $\Delta_0$-formulas and standard terms of the signature $\sigma$, respectively.
The language $L_1$ is defined as an inductive extension of the basic language $L_0$ with conditional terms of the following form:
$$Cond(t_1, \varphi_1, \ldots, t_n, \varphi_n, t_{n+1}) = \begin{cases} t_1, & \text{if } HW(M) \models \varphi_1 \\ t_2, & \text{if } HW(M) \models \varphi_2 \ \& \ \neg\varphi_1 \\ \ \ldots \\ t_n, & \text{if } HW(M) \models \varphi_n \ \& \ \neg\varphi_1 \ \& \ \ldots \ \& \ \neg\varphi_{n-1} \\ t_{n+1}, & \text{otherwise} \end{cases} \quad (1)$$
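To make the branching behaviour concrete, here is a minimal Python sketch of how a conditional term selects its value; the helper name cond and the list-of-pairs representation are illustrative assumptions for this sketch, not part of the syntax of $L_1$.

```python
# A minimal sketch of conditional-term evaluation: the value is the first t_i
# whose guard phi_i holds (all earlier guards having failed), else t_{n+1}.
# Names and representation are illustrative only, not the syntax of L_1.

def cond(branches, default):
    """branches: list of (term_value, guard_bool); default: the value of t_{n+1}."""
    for value, guard in branches:
        if guard:
            return value
    return default

# Example: Cond(t1, phi1, t2, phi2, t3) with phi1 false and phi2 true yields t2.
print(cond([("t1", False), ("t2", True)], "t3"))  # -> "t2"
```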
The concepts of L-formulas, L-programs and p-iterative terms are defined as follows.
Basis of induction: Any L 1 -program is an L-program, and any L 1 -formula is an L-formula.
Induction step: Let $g(x)$ be an L-program and $\varphi(x)$ be an L-formula, where there is a constant $C_g$ such that for any $w \in HW(M)$, the following inequality holds:
$$|g(w)| \leq |w| + C_g \quad (2)$$
The notation $g^i(x)$ means that the L-program g is applied i times:
$$g^i(x) = g(g^{i-1}(x)), \text{ where } g^0(x) = x$$
The notation of the p-iterative term [6] is defined as follows:
$$Iteration_{g,\varphi}(w, n) = \begin{cases} g^i(w), & \text{if there is } i \leq n \text{ such that } HW(M) \models \varphi(g^i(w)) \text{ and } HW(M) \models \neg\varphi(g^j(w)) \text{ for all } j < i \\ false, & \text{otherwise} \end{cases} \quad (3)$$
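The following Python sketch illustrates the intended semantics of the p-iterative term: apply g step by step, at most n times, and return the first iterate on which $\varphi$ holds, or false otherwise. The function name iteration and the use of Python callables are assumptions made for illustration only.

```python
# Illustrative semantics of Iteration_{g,phi}(w, n): iterate g at most n times
# and return the first iterate g^i(w) satisfying phi; otherwise return False.
# This is a sketch of the intended behaviour, not an implementation of L.

def iteration(g, phi, w, n):
    x = w                      # g^0(w) = w
    for i in range(n + 1):
        if phi(x):             # first i <= n with phi(g^i(w)) true
            return x
        x = g(x)               # move on to g^{i+1}(w)
    return False               # phi never became true within the bound

# Example: strip the last element of a list until only two elements remain.
result = iteration(lambda l: l[:-1], lambda l: len(l) == 2, [1, 2, 3, 4, 5], 10)
print(result)  # -> [1, 2]
```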
The L-program definitions in the inductive step are as follows:
  • $Iteration_{g,\varphi}(t_1, t_2)$ is an L-program, where $g, t_1, t_2$ are L-programs and $\varphi$ is an L-formula;
  • $Cond(t_1, \varphi_1, \ldots, t_n, \varphi_n, t_{n+1})$ is an L-program, where $t_1, \ldots, t_{n+1}$ are L-programs and $\varphi_1, \ldots, \varphi_n$ are L-formulas;
  • $F(t_1, \ldots, t_n)$ is an L-program, where $F \in \sigma$ and $t_1, \ldots, t_n$ are L-programs.
The L-formula definitions in the inductive step are as follows:
  • $t_1 = t_2$ is an L-formula, where $t_1, t_2$ are L-programs;
  • $P(t_1, \ldots, t_n)$ is an L-formula, where $P \in \sigma$ and $t_1, \ldots, t_n$ are L-programs;
  • $\Phi \ \& \ \Psi$, $\Phi \vee \Psi$, $\Phi \rightarrow \Psi$, $\neg\Phi$ are L-formulas, where $\Phi, \Psi$ are L-formulas;
  • $\forall x\, \delta\, t\ \Phi$ and $\exists x\, \delta\, t\ \Phi$ are L-formulas, where t is an L-program, $\Phi$ is an L-formula and $\delta$ is one of the basic list relations of $\sigma$ (membership or initial segment).
Theorem 1
((Solution to the problem P = L) [6]). Let $HW(M)$ be a p-computable hereditarily finite list superstructure of the finite signature σ. Then, the following are true:
(1) 
Any L-program has polynomial computational complexity.
(2) 
For any p-computable function, there is a suitable L-program that implements it.
In the current work, we modify the concept of the p-iterative term: instead of the inequality $|g(x)| \leq |x| + C_g$, we require the fulfillment of certain conditions on the L-program g and the L-formula $\varphi$.
Suppose the L-program $g(x)$, for a fixed $n \in \mathbb{N}$ and some polynomial $h(x)$, is defined as follows:
$$g(w) = \begin{cases} w^*, & \text{if } w = \langle w_1, \ldots, w_n \rangle \\ false, & \text{otherwise} \end{cases} \quad (4)$$
where $w^* = \langle w_1^*, \ldots, w_n^* \rangle$ and the following conditions are fulfilled (up to a permutation):
(1) $|w_1^*| \leq |w_1| + C \cdot \sum_{i=2}^{n} |w_i|^p$;
(2) $|w_i^*| \leq |w_i|$ for all $i \in [2, \ldots, n]$.
The proof of the next lemma almost completely repeats the proof of Theorem 1 from [6]; the same length and complexity estimates are used:
Lemma 1.
The term $Iteration_{g,\varphi}$ from Equation (3), with conditions (1)–(2) from Equation (4) imposed on g, is a p-computable function.

3. Neural Networks

This work considers multilayer feedforward fully connected neural networks of the type in [10] (see also [14,15]), which have one input layer, one output layer and several hidden layers. For simplicity of presentation, we assume that there are no bias neurons in such neural networks. However, all results of this work are also valid for neural networks with bias neurons.
For example, such a neural network with one hidden layer has the following form:
[Figure: a multilayer feedforward fully connected neural network with one input layer, one hidden layer and one output layer.]
By default, for all neurons of the neural network, the activation function will be a sigmoid of the form:
$$Sig(x) = \frac{1}{1 + e^{-x}}$$
The derivative of this function has the form:
$$Sig'(x) = Sig(x) \cdot (1 - Sig(x))$$
Consider the approximation [16] of the function $e^{-x}$ by its Taylor series expansion up to nine terms:
$$e^{-x} \approx 1 - x + \frac{x^2}{2!} - \frac{x^3}{3!} + \ldots + (-1)^n \frac{x^n}{n!} \quad (5)$$
Denote by f the sigmoid approximation that uses the Taylor series from Equation (5) for $e^{-x}$. In addition, denote by $f'(x) = f(x) \cdot (1 - f(x))$ the corresponding approximation of the derivative $Sig'(x)$:
Remark 1.
f and f′ are p-computable functions.
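As an illustration of Remark 1, the following Python sketch computes the approximation f via the truncated Taylor series for $e^{-x}$ and the derivative approximation $f'(x) = f(x)(1 - f(x))$; nine series terms are assumed, following the text, and the function names are illustrative.

```python
# Sketch of the Taylor-approximated sigmoid f and its derivative f'(x) = f(x)(1 - f(x)).
# exp_neg approximates e^{-x} by the truncated series 1 - x + x^2/2! - ... ;
# nine terms are used here, as in the approximation described in the text.
from math import factorial

def exp_neg(x, n_terms=9):
    return sum((-x) ** k / factorial(k) for k in range(n_terms))

def f(x):
    return 1.0 / (1.0 + exp_neg(x))

def f_prime(x):
    return f(x) * (1.0 - f(x))

print(f(0.0), f_prime(0.0))  # -> 0.5 0.25 (matches the exact sigmoid at x = 0)
```

Only additions, multiplications and divisions of rationals are involved, which is why both approximations remain p-computable.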
Let $c(x, y): \mathbb{N} \times \mathbb{N} \rightarrow \mathbb{N}$ be a standard pair numbering function:
$$c(x, y) = \frac{(x + y + 1) \cdot (x + y)}{2} + y, \text{ where } x, y \in \mathbb{N}$$
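A direct Python transcription of the pairing function, which is used below to index the mth neuron of the kth layer as $n_{c(k,m)}$; the function name c follows the text.

```python
# The pairing function c(x, y) = (x + y + 1)(x + y)/2 + y used to index
# neuron m of layer k as n_{c(k, m)}; an illustrative sketch.
def c(x, y):
    return (x + y + 1) * (x + y) // 2 + y

# Distinct (layer, position) pairs get distinct indices:
print(c(1, 1), c(1, 2), c(2, 1))  # -> 4 8 7
```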
Let us denote the mth neuron in the kth layer of the neural network N by $n_{c(k,m)}$. Each neuron has the following list representation:
$$\underline{n_i}: \langle \underline{f}, \langle w_{ij_1}, \ldots, w_{ij_k} \rangle \rangle$$
where $\underline{f}$ is a constant symbol in the signature σ for the activation function and the $w_{ij}$ are the weights of the synapses going from neuron $n_i$ to neuron $n_j$.
A neuron $n_i$ of the output layer has the form:
$$\underline{n_i}: \langle \underline{f}, \langle \rangle \rangle$$
Let us define the p-computable predicate Neuron, which selects the list encodings of neurons. A characteristic function for this predicate is defined as follows:
$$Cond(1, \Phi(x), 0)$$
where Cond is the conditional term from Equation (1) and $\Phi(x)$ has the following form:
$$\Phi(x): (first(x) = \underline{f}) \ \& \ (NumElements(x) = 2) \ \& \ ((second(x) = nil) \vee (\forall l \in second(x)\ Number(l)))$$
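The following Python sketch mirrors the list encoding of a neuron and the characteristic function of the predicate Neuron; Python lists stand in for hereditarily finite lists, the string "f" for the constant symbol $\underline{f}$, and the empty list for nil. These are modelling assumptions of the sketch only.

```python
# Sketch of the list encoding of a neuron <f, <w_1, ..., w_k>> and of the
# characteristic function of the predicate Neuron. Python lists stand in for
# hereditarily finite lists and the string "f" for the constant symbol f.
def is_number(x):
    return isinstance(x, (int, float))

def neuron(x):
    return (isinstance(x, list) and len(x) == 2 and x[0] == "f"
            and isinstance(x[1], list) and all(is_number(w) for w in x[1]))

hidden_neuron = ["f", [0.5, -1.2, 0.7]]  # weights of the outgoing synapses
output_neuron = ["f", []]                # output-layer neurons carry no weights
print(neuron(hidden_neuron), neuron(output_neuron))  # -> True True
```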
For the layer with number i of our neural network, we can define the following encoding:
$$\underline{s_i}: \langle \underline{n_{c(i,1)}}, \ldots, \underline{n_{c(i,n_i)}} \rangle$$
Let us define the p-computable predicate Layer, which selects a layer. A characteristic function for this predicate is defined as follows:
$$Cond(1, \Phi(x), 0)$$
where $\Phi(x)$ has the form:
$$\Phi(x): \forall l \in x\ Neuron(l)$$
Then, the list encoding of the neural network N is as follows:
$$\underline{N}: \langle \underline{s_1}, \ldots, \underline{s_k} \rangle$$
Let us define the p-computable predicate NeuralNetwork, which selects the list encodings of neural networks. A characteristic function for this predicate is defined as follows:
$$Cond(1, \Phi(x), 0)$$
where $\Phi(x)$ has the form:
$$\Phi(x): (\forall l \in x\ Layer(l)) \ \& \ (\forall i \leq NumElements(x) - 1\ \ \forall w \in getElement(x, i)\ \ NumElements(second(w)) = NumElements(getElement(x, i + 1))) \ \& \ (\forall w \in head(x)\ \ second(w) = nil)$$
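Continuing the previous sketch, the next Python fragment checks the Layer and NeuralNetwork conditions on the nested-list encoding: every non-output neuron must carry exactly as many weights as there are neurons in the next layer, and output-layer neurons must carry none. All helper names are again illustrative assumptions.

```python
# Sketch of the Layer and NeuralNetwork predicates on the list encodings.
def neuron(x):  # as in the previous sketch
    return (isinstance(x, list) and len(x) == 2 and x[0] == "f"
            and isinstance(x[1], list)
            and all(isinstance(w, (int, float)) for w in x[1]))

def layer(x):
    return isinstance(x, list) and all(neuron(n) for n in x)

def neural_network(x):
    if not (isinstance(x, list) and x and all(layer(s) for s in x)):
        return False
    for s, s_next in zip(x[:-1], x[1:]):               # consecutive layers
        if any(len(n[1]) != len(s_next) for n in s):   # weight count must match
            return False
    return all(n[1] == [] for n in x[-1])              # output layer has no weights

net = [[["f", [0.1, 0.2]], ["f", [0.3, 0.4]]],         # input layer, 2 neurons
       [["f", [0.5]], ["f", [-0.5]]],                  # hidden layer, 2 neurons
       [["f", []]]]                                    # output layer, 1 neuron
print(neural_network(net))  # -> True
```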
Remark 2.
Any neural network N is uniquely restored from its list encoding $\underline{N}$.
Consider the p-computable hereditarily finite list superstructure $HW(M)$ of the signature
$$\sigma^* = \sigma \cup \{NeuralNetwork^{(1)}, Layer^{(1)}, Neuron^{(1)}\}$$
where neural networks are encoded as elements of $HW(M)$.
Let a be a number. Then, we define the function × as follows:
$$a \times l = \begin{cases} \langle a \cdot b_1, \ldots, a \cdot b_k \rangle, & \text{if } l = \langle b_1, \ldots, b_k \rangle \text{ and all } b_i \text{ are numbers} \\ false, & \text{otherwise} \end{cases}$$
Remark 3.
× is a p-computable function.
Let us define the function ⊎ as follows:
$$l \uplus w = \begin{cases} \langle a_1 + b_1, \ldots, a_k + b_k \rangle, & \text{if } l = \langle a_1, \ldots, a_k \rangle,\ w = \langle b_1, \ldots, b_k \rangle \text{ and all } a_i, b_i \text{ are numbers} \\ false, & \text{otherwise} \end{cases}$$
Remark 4.
⊎ is a p-computable function.
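A short Python sketch of the two list operations, with scalar_mult and list_add as assumed names for × and ⊎; both return False on ill-formed arguments, mirroring the definitions above.

```python
# Sketch of the scalar operation a x l and the componentwise sum l ⊎ w on
# number lists; False is returned on ill-formed arguments, as in the text.
def scalar_mult(a, l):
    if isinstance(l, list) and all(isinstance(b, (int, float)) for b in l):
        return [a * b for b in l]
    return False

def list_add(l, w):
    if (isinstance(l, list) and isinstance(w, list) and len(l) == len(w)
            and all(isinstance(v, (int, float)) for v in l + w)):
        return [a + b for a, b in zip(l, w)]
    return False

print(scalar_mult(2, [1, 2, 3]))           # -> [2, 4, 6]
print(list_add([1, 2, 3], [10, 20, 30]))   # -> [11, 22, 33]
```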
Let us define the operation ⊗ as follows:
$$\langle a_1, \ldots, a_{m_i} \rangle \otimes \underline{s_i} = \begin{cases} \langle b_1, \ldots, b_{m_{i+1}} \rangle, & \text{if } s_i \text{ is a non-output layer} \\ \langle c_1, \ldots, c_{m_i} \rangle, & \text{if } s_i \text{ is the output layer} \\ false, & \text{otherwise} \end{cases}$$
where
$$b_m = \sum_{j=1}^{m_i} f(a_j) \cdot GetElement(second(\underline{n_{c(i,j)}}), m), \quad m \in [1, \ldots, m_{i+1}]$$
and where
$$\underline{n_{c(i,j)}}: GetElement(\underline{s_i}, j)$$
and
$$c_m = f(a_m), \quad m \in [1, \ldots, m_i]$$
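Before the formal proof, here is a hedged Python sketch of the operation ⊗ on the nested-list encoding: for a non-output layer it forms the weighted sums $b_m$, and for the output layer it applies f to each activation. The function name apply_layer and the reuse of the Taylor-approximated f are assumptions of the sketch.

```python
# Sketch of <a_1,...,a_m> ⊗ s_i: for a non-output layer, b_m sums f(a_j)
# weighted by the m-th outgoing weight of neuron j; for the output layer it
# simply applies f to every activation. f is the approximated sigmoid.
from math import factorial

def f(x, n_terms=9):
    return 1.0 / (1.0 + sum((-x) ** k / factorial(k) for k in range(n_terms)))

def apply_layer(activations, layer):
    weights = [n[1] for n in layer]            # n = ["f", [w_1, ..., w_k]]
    if all(w == [] for w in weights):          # output layer: no outgoing weights
        return [f(a) for a in activations]
    next_size = len(weights[0])                # m_{i+1}
    return [sum(f(a) * w[m] for a, w in zip(activations, weights))
            for m in range(next_size)]

hidden = [["f", [0.5]], ["f", [-0.5]]]
print(apply_layer([1.0, 2.0], hidden))  # one value for the single next-layer neuron
```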
Lemma 2.
⊗ is a p-computable function.
Proof. 
We prove this lemma using the construction of a p-iterative term $Iteration_{g,\varphi}$. Let us define the operation of the L-program g for a non-output layer $s_i$ as follows:
$$g(\langle \langle a_1, \ldots, a_{m_i} \rangle, \underline{s_i}, \langle \rangle \rangle) = \langle \langle a_1, \ldots, a_{m_i - 1} \rangle, tail(\underline{s_i}), \langle b_1^{(1)}, \ldots, b_{m_{i+1}}^{(1)} \rangle \rangle$$
where
$$\langle b_1^{(1)}, \ldots, b_{m_{i+1}}^{(1)} \rangle = f(a_{m_i}) \times second(head(\underline{s_i}))$$
and on the jth step, we have
$$g(\langle \langle a_1, \ldots, a_{m_i - j} \rangle, tail^j(\underline{s_i}), \langle b_1^{(j)}, \ldots, b_{m_{i+1}}^{(j)} \rangle \rangle) = \langle \langle a_1, \ldots, a_{m_i - j - 1} \rangle, tail^{j+1}(\underline{s_i}), \langle b_1^{(j+1)}, \ldots, b_{m_{i+1}}^{(j+1)} \rangle \rangle$$
where the notation $tail^j$ means that the list function tail is applied j times and
$$\langle b_1^{(j+1)}, \ldots, b_{m_{i+1}}^{(j+1)} \rangle = \langle b_1^{(j)}, \ldots, b_{m_{i+1}}^{(j)} \rangle \uplus f(a_{m_i - j}) \times second(head(tail^j(\underline{s_i})))$$
Using Remarks 3 and 4, we find that g is a p-computable function.
The L-formula φ is defined as follows:
$$\varphi: second(x) = nil$$
Conditions (1)–(2) from Equation (4) for the p-iterative term $Iteration_{g,\varphi}$ are met, and therefore, by Lemma 1, the operation ⊗ is a p-computable function when $s_i$ is a non-output layer.
If $s_k$ is the output layer, then
$$g(\langle \langle a_1, \ldots, a_{m_k} \rangle, \underline{s_k} \rangle) = \langle f(a_1), \ldots, f(a_{m_k}) \rangle$$
Let us define the operation W as follows:
$$W(\langle a_1, \ldots, a_{m_1} \rangle, \underline{N}) = \langle o_1, \ldots, o_{m_k} \rangle$$
where $\underline{N} = \langle \underline{s_1}, \ldots, \underline{s_k} \rangle$, the $a_i$ are the incoming signals and the $o_i$ are the outputs of the output-layer neurons:
Lemma 3.
W is a p-computable function.
Proof. 
We prove this fact by constructing a p-iterative term $Iteration_{g,\varphi}$. The function g is defined as follows:
$$g(\langle \langle a_1, \ldots, a_{m_i} \rangle, \langle \underline{s_i}, \ldots, \underline{s_k} \rangle \rangle) = \langle \langle a_1, \ldots, a_{m_i} \rangle \otimes \underline{s_i}, \langle \underline{s_{i+1}}, \ldots, \underline{s_k} \rangle \rangle$$
The construction of g implies that g is a p-computable function.
The L-formula φ is defined as $head(x) = nil$.
Conditions (1)–(2) from Equation (4) for the p-iterative term $Iteration_{g,\varphi}$ are met, and therefore, by Lemma 1, the function W is a p-computable function. □
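A minimal Python sketch of the forward pass W, assuming the apply_layer function and the example network net from the earlier sketches; it simply folds the ⊗ operation over the layers.

```python
# Sketch of the forward pass W(<a_1,...,a_{m_1}>, N): fold the layer operation
# over s_1, ..., s_k; the last application handles the output layer.
# `apply_layer` and `net` are assumed to come from the previous sketches.
def forward(inputs, net, apply_layer):
    signal = inputs
    for layer in net:
        signal = apply_layer(signal, layer)
    return signal

# Example (using the earlier 2-2-1 network):
# outputs = forward([1.0, 0.0], net, apply_layer)
```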
Let us define the function O as follows:
$$O(\underline{a}, \langle \underline{s_1}, \ldots, \underline{s_k} \rangle) = \langle \underline{so_1}, \ldots, \underline{so_k} \rangle$$
where $\underline{so_i} = \langle o_{c(i,1)}, \ldots, o_{c(i,m_i)} \rangle$ represents the neuron outputs of the ith layer:
Lemma 4.
O is a p-computable function.
Proof. 
The proof almost repeats the proof of Lemma 3. □
The backpropagation algorithm [17,18] will be used to configure the neural network. First, it is necessary to find the following coefficients:
$$\delta_j = \begin{cases} (o_j - t_j) \cdot o_j \cdot (1 - o_j), & \text{if } j \text{ is an output neuron} \\ \left( \sum_{l \in L} w_{jl} \delta_l \right) \cdot o_j \cdot (1 - o_j), & \text{if } j \text{ is an inner neuron} \end{cases}$$
where $t_j$ is the target output for the jth neuron of the output layer.
The corrective weights are defined by the formula:
$$\Delta w_{ij} = -\eta \cdot o_i \cdot \delta_j$$
where η is the learning rate (a fixed constant, usually from [0, 1]).
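The following Python sketch computes the two kinds of δ coefficients and the weight correction $\Delta w_{ij} = -\eta \cdot o_i \cdot \delta_j$ for single values; the layer-by-layer bookkeeping of the encoded network is omitted, and all function names are illustrative.

```python
# Sketch of the backpropagation quantities: delta_j for output and inner
# neurons and the weight correction dw_ij = -eta * o_i * delta_j.
def delta_output(o_j, t_j):
    return (o_j - t_j) * o_j * (1.0 - o_j)

def delta_inner(o_j, weights_to_next, deltas_next):
    return sum(w * d for w, d in zip(weights_to_next, deltas_next)) * o_j * (1.0 - o_j)

def weight_correction(eta, o_i, delta_j):
    return -eta * o_i * delta_j

d_out = delta_output(o_j=0.8, t_j=1.0)       # delta of an output neuron
d_in = delta_inner(0.6, [0.5], [d_out])      # propagate one level back
print(weight_correction(0.1, 0.6, d_out))    # correction for one synapse
```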
Let $\underline{a} = \langle a_1, \ldots, a_{m_1} \rangle$ be an input and $\underline{so_i}$ be the neuron outputs of the ith layer.
Let us define the function Δ as follows:
$$\Delta(\underline{a}, \underline{t}, \langle \underline{s_1}, \ldots, \underline{s_k} \rangle) = \langle \underline{s\delta_1}, \ldots, \underline{s\delta_k} \rangle$$
where $\underline{s\delta_i} = \langle \delta_{c(i,1)}, \ldots, \delta_{c(i,m_i)} \rangle$:
Lemma 5.
Δ is a p-computable function.
Proof. 
We build a p-iterative term $Iteration_{g,\varphi}$ that simulates the operation of the function Δ as follows:
$$g(\langle \langle \rangle, \langle \underline{so_1}, \ldots, \underline{so_k} \rangle, \langle \underline{s_1}, \ldots, \underline{s_k} \rangle, \underline{t} \rangle) = \langle \langle \underline{s\delta_k} \rangle, \langle \underline{so_1}, \ldots, \underline{so_k} \rangle, \langle \underline{s_1}, \ldots, \underline{s_{k-1}} \rangle, \langle \rangle \rangle$$
where
$$\delta_{c(k,j)} = (o_{c(k,j)} - t_{c(k,j)}) \cdot o_{c(k,j)} \cdot (1 - o_{c(k,j)}), \quad j \in [1, \ldots, m_k]$$
and, on the subsequent steps,
$$g(\langle \langle \underline{s\delta_{j+1}}, \ldots, \underline{s\delta_k} \rangle, \langle \underline{so_1}, \ldots, \underline{so_{j+1}} \rangle, \langle \underline{s_1}, \ldots, \underline{s_j} \rangle, \langle \rangle \rangle) = \langle \langle \underline{s\delta_j}, \ldots, \underline{s\delta_k} \rangle, \langle \underline{so_1}, \ldots, \underline{so_j} \rangle, \langle \underline{s_1}, \ldots, \underline{s_{j-1}} \rangle, \langle \rangle \rangle$$
where
$$\delta_{c(j,i)} = \left( \sum_{l \in L} w_{c(j,i)\,c(j+1,l)} \cdot \delta_{c(j+1,l)} \right) \cdot o_{c(j,i)} \cdot (1 - o_{c(j,i)})$$
The L-formula φ is defined as follows:
$$\varphi: head(tail(x)) = nil$$
Conditions (1)–(2) from Equation (4) for the p-iterative term $Iteration_{g,\varphi}$ are met, and therefore, by Lemma 1, the function Δ is a p-computable function. □
When all the parameters for weight correction have been found, the weights can be updated using the formula:
$$w_{ij}^* = w_{ij} + \Delta w_{ij} = w_{ij} - \eta \cdot o_i \cdot \delta_j$$
Let us define the function T as follows:
$$T(\langle \langle \underline{s_1}, \ldots, \underline{s_k} \rangle, \langle \underline{so_1}, \ldots, \underline{so_k} \rangle, \langle \underline{s\delta_1}, \ldots, \underline{s\delta_k} \rangle \rangle) = \langle \underline{s_1}^*, \ldots, \underline{s_k}^* \rangle$$
where the weights of the neurons $\underline{n_i^*} \in \underline{s_m}^*$ are obtained from the weights of the neurons $\underline{n_i} \in \underline{s_m}$ by adding $\Delta w_{ij}$, where $j \in [1, \ldots, m_{i+1}]$:
Lemma 6.
T is a p-computable function.
Proof. 
The proof of this lemma is achieved by constructing a suitable p-iterative term $Iteration_{g,\varphi}$, as in Lemma 5. □
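As a complement to Lemma 6, here is a hedged Python sketch of the update performed by T on the nested-list encoding: each weight $w_{ij}$ is replaced by $w_{ij} - \eta \cdot o_i \cdot \delta_j$, with outputs taken per layer and deltas taken from the next layer. The function update_network and its argument layout are assumptions of the sketch, not the exact term construction of the paper.

```python
# Sketch of the weight update performed by T on the nested-list encoding:
# every weight w_ij of a non-output neuron is replaced by w_ij - eta*o_i*delta_j,
# where o_i is that neuron's output and delta_j the delta of the target neuron
# in the next layer. `outputs` and `deltas` are per-layer lists of numbers.
def update_network(net, outputs, deltas, eta):
    new_net = []
    for s, o_layer, d_next in zip(net[:-1], outputs[:-1], deltas[1:]):
        new_layer = []
        for n, o_i in zip(s, o_layer):            # n = ["f", [w_1, ..., w_k]]
            new_w = [w - eta * o_i * d_j for w, d_j in zip(n[1], d_next)]
            new_layer.append(["f", new_w])
        new_net.append(new_layer)
    new_net.append(net[-1])                       # output layer carries no weights
    return new_net
```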
The following theorem statement follows automatically from Remarks 3 and 4 and Lemmas 2–6:
Theorem 2.
Let $HW(M)$ of the signature σ be a p-computable model.
Then, $HW(M)$ of the signature $\sigma \cup \{NeuralNetwork, Layer, Neuron\} \cup \{\times, \uplus, \otimes, W, O, \Delta, T\}$ is a p-computable model.

4. Materials and Methods

This paper used the main tools and methods of semantic programming, such as p-iterative terms, conditional terms and a p-computable hereditarily finite list superstructure $HW(M)$ of the signature σ. These techniques allowed us to encode the elements of multilayer feedforward fully connected neural networks with a sigmoidal activation function as hereditarily finite lists, so that a p-iterative term satisfying conditions (1)–(2) from Equation (4) simulates the operation of the neural network itself. Moreover, the construction of the p-iterative term guarantees polynomial computational complexity of the neural network operation, as well as of the backpropagation algorithm.

5. Results

The main result of this work is the construction of a polynomial-computable representation of a multilayer feedforward fully connected neural network with a sigmoidal activation function for the p-complete logical programming language L. This result is equivalent to the statement of Theorem 2 and allows neural networks to be introduced and used as a basic library of L.

6. Discussion

A polynomial-computable representation of neural networks in the p-complete logical programming language L is extremely important in practice. It allows artificial intelligence algorithms to be implemented without the problems that arise in Turing-complete languages, one of which is the halting problem.
An open question is the existence of polynomial-computable representations for other types of neural networks, such as recurrent neural networks [19] and modular neural networks [20]. Most likely, the answer in the general case will be negative. Is it possible to find restrictions under which such a representation must exist?
Moreover, we are interested not only in the existence of such representations but also in their constructive implementation within the framework of the theory of semantic programming and the p-complete language L.

7. Conclusions

This paper presents a constructive method for the polynomial-computable representation of neural networks in the p-complete logical programming language L. We can now add a library for neural networks to the p-complete language L, which will allow us to implement algorithms using both the logical language and neural networks. Neural networks play a great role in every direction of AI, so this p-computable representation will help expand the expressive power of our core language L.
Since accuracy, speed and a logical explanation of the result are important criteria for the implementation of artificial intelligence algorithms, the given p-complete logical programming language L solves artificial intelligence problems much better than a similar implementation in Turing-complete languages. Moreover, in this language, there is no halting problem, which is very important for the stability and reliability of software solutions.
The main applications of this result are blockchains, smart contracts, robotics and artificial intelligence. Moreover, these results will be applied in the conception of building smart cities [21], where almost all possible options for artificial intelligence are involved: image recognition, data mining, machine learning, neural networks, smart contracts, robotics, finance based on cryptocurrencies and blockchains.

Author Contributions

Conceptualization, S.G. and A.N.; methodology, S.G. and A.N.; formal analysis, S.G.; validation, S.G.; investigation, S.G. and A.N.; writing—original draft preparation, A.N.; writing—review and editing, A.N.; supervision, S.G.; project administration, S.G.; software, A.N. All authors have read and agreed to the published version of the manuscript.

Funding

This work was performed within the state task of the Sobolev Institute of Mathematics (Project No. FWNF-2022-0011).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ershov, Y.L.; Goncharov, S.S.; Sviridenko, D.I. Semantic programming. Inf. Process. 1986, 86, 1113–1120. [Google Scholar]
  2. Ershov, Y.L. Definability and Computability; Springer: New York, NY, USA, 1996. [Google Scholar]
  3. Michaelson, G. Programming Paradigms, Turing Completeness and Computational Thinking. Art Sci. Eng. Program. 2020, 4, 4. [Google Scholar] [CrossRef] [PubMed]
  4. Goncharov, S. Conditional terms in semantic programming. Sib. Math. J. 2017, 58, 794–800. [Google Scholar] [CrossRef]
  5. Ospichev, S.; Ponomarev, D. On the complexity of formulas in semantic programming. Semr 2018, 15, 987–995. [Google Scholar] [CrossRef]
  6. Goncharov, S.S.; Nechesov, A.V. Solution of the Problem P = L. Mathematics 2022, 10, 113. [Google Scholar] [CrossRef]
  7. Bathaee, Y. The artificial intelligence black box and the failure of intent and causation. Harv. J. Law Technol. 2018, 31, 889–938. [Google Scholar]
  8. Sarker, I.H. Machine Learning: Algorithms, Real-World Applications and Research Directions. SN Comput. Sci. 2021, 2, 160. [Google Scholar] [CrossRef] [PubMed]
  9. Nechesov, A.V. Some Questions on Polynomially Computable Representations for Generating Grammars and Backus–Naur Forms. Sib. Adv. Math. 2022, 32, 299–309. [Google Scholar] [CrossRef]
  10. Leshno, M.; Lin, V.; Pinkus, A.; Schocken, S. Multilayer feedforward networks with a nonpolynomial activation function can approximate any function. Neural Netw. 1993, 6, 861–867. [Google Scholar] [CrossRef] [Green Version]
  11. Hagan, M.T.; Demuth, H.B.; Beale, M.H.; Jesus, O.D. Neural Network Design; Martin Hagan: San Diego, CA, USA, 2014. [Google Scholar]
  12. Russell, S.J.; Norvig, P. Artificial Intelligence—A Modern Approach, 3rd ed.; Prentice Hall: Hoboken, NJ, USA, 2010. [Google Scholar]
  13. Goncharov, S.; Nechesov, A. Polynomial Analogue of Gandy’s Fixed Point Theorem. Mathematics 2021, 9, 2102. [Google Scholar] [CrossRef]
  14. Ahamed, I.; Akthar, S. A Study on Neural Network Architectures. Comput. Eng. Intell. Syst. 2016, 7, 1–7. [Google Scholar]
  15. Wilamowski, B. Neural network architectures and learning algorithms. IEEE Ind. Electron. Mag. 2009, 3, 56–63. [Google Scholar] [CrossRef]
  16. Temurtas, F.; Gulbag, A.; Yumusak, N. A Study on Neural Networks Using Taylor Series Expansion of Sigmoid Activation Function. In Computational Science and Its Applications—ICCSA 2004; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2004; Volume 3046. [Google Scholar] [CrossRef]
  17. Rumelhart, D.; Hinton, G.; Williams, R. Learning representations by back-propagating errors. Nature 1986, 323, 533–536. [Google Scholar] [CrossRef]
  18. Backpropagation. Available online: https://en.wikipedia.org/wiki/Backpropagation (accessed on 10 November 2022).
  19. Schmidt, R. Recurrent Neural Networks (RNNs): A gentle Introduction and Overview. arXiv 2019, arXiv:1912.05911. [Google Scholar]
  20. Auda, G.; Kamel, M.; Raafat, H. Modular neural network architectures for classification. In Proceedings of the International Conference on Neural Networks, Washington, DC, USA, 3–6 June 1996; Volume 2, pp. 1279–1284. [Google Scholar] [CrossRef]
  21. Nechesov, A.V.; Safarov, R.A. Web 3.0 and smart cities. In Proceedings of the International Conference “Current State and Development Perspectives of Digital Technologies and Artificial Intelligence”, Samarkand, Uzbekistan, 27–28 October 2022. [Google Scholar]