# Polynomial-Computable Representation of Neural Networks in Semantic Programming


## Abstract


## 1. Introduction

## 2. Preliminaries

- (1) $nil$: a constant denoting the empty list;
- (2) $head^{(1)}$: returns the last element of the list, or $nil$ if the list is empty;
- (3) $tail^{(1)}$: returns the list without its last element, or $nil$ if the list is empty;
- (4) $getElement^{(2)}$: returns the $i$-th element of the list, or $nil$ if no such element exists;
- (5) $NumElements^{(1)}$: returns the number of elements in the list;
- (6) $first^{(1)}$: returns the first element of the list, or $nil$ if the list is empty;
- (7) $second^{(1)}$: returns the second element of the list, or $nil$ if no such element exists;
- (8) $\in^{(2)}$: the predicate “to be an element of a list”;
- (9) $\subseteq^{(2)}$: the predicate “to be an initial segment of a list”.
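The signature above can be sketched directly in Python; here tuples stand in for lists of the list superstructure, `nil` is the empty list, and the function names transliterate the symbols above (the encoding itself is an illustrative assumption, not part of the formal construction):

```python
# Sketch of the list-signature operations, assuming lists grow at the tail
# (so head returns the LAST element, as in the definitions above).
nil = ()  # the constant nil: the empty list

def head(lst):
    """Last element of the list, or nil if the list is empty."""
    return lst[-1] if lst else nil

def tail(lst):
    """The list without its last element, or nil if the list is empty."""
    return lst[:-1] if lst else nil

def getElement(lst, i):
    """The i-th element (1-based), or nil if no such element exists."""
    return lst[i - 1] if 1 <= i <= len(lst) else nil

def NumElements(lst):
    """Number of elements in the list."""
    return len(lst)

def first(lst):
    """First element of the list, or nil if the list is empty."""
    return lst[0] if lst else nil

def second(lst):
    """Second element of the list, or nil if no such element exists."""
    return lst[1] if len(lst) > 1 else nil

def is_element(x, lst):
    """The predicate "to be an element of a list" (the symbol ∈)."""
    return x in lst

def is_initial_segment(a, b):
    """The predicate "to be an initial segment of a list" (the symbol ⊆)."""
    return b[:len(a)] == a
```

Each operation touches at most one full pass over its argument, so all of them run in time linear in the length of the list.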

- $Iteration_{g,\phi}(t_1, t_2)$ is an L-program, where $g, t_1, t_2$ are L-programs and $\phi$ is an L-formula;
- $Cond(t_1, \phi_1, \cdots, t_n, \phi_n, t_{n+1})$ is an L-program, where $t_1, \cdots, t_{n+1}$ are L-programs and $\phi_1, \cdots, \phi_n$ are L-formulas;
- $F(t_1, \cdots, t_n)$ is an L-program, where $F \in \sigma$ and $t_1, \cdots, t_n$ are L-programs.

- $t_1 = t_2$ is an L-formula, where $t_1, t_2$ are L-programs;
- $P(t_1, \cdots, t_n)$ is an L-formula, where $P \in \sigma$ and $t_1, \cdots, t_n$ are L-programs;
- $\Phi \,\&\, \Psi$, $\Phi \vee \Psi$, $\Phi \to \Psi$, $\neg \Phi$ are L-formulas, where $\Phi$, $\Psi$ are L-formulas;
- $\exists x \,\delta\, t\, \Phi$ and $\forall x \,\delta\, t\, \Phi$ are L-formulas, where $t$ is an L-program, $\Phi$ is an L-formula, and $\delta \in \{\in, \subseteq, \le\}$.
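One way to read the conditional term $Cond(t_1, \phi_1, \cdots, t_n, \phi_n, t_{n+1})$ is: the value of the first $t_i$ whose guard $\phi_i$ holds, and the default term $t_{n+1}$ otherwise. A minimal sketch of that evaluation, with terms and formulas modelled as zero-argument Python callables (this reading and the encoding are illustrative assumptions):

```python
def cond(*args):
    """Evaluate Cond(t1, phi1, ..., tn, phin, t_default).

    args alternates term/guard callables and ends with the default term;
    the value is that of the first t_i whose guard phi_i evaluates to True.
    """
    *pairs, default = args
    for t, phi in zip(pairs[0::2], pairs[1::2]):
        if phi():
            return t()
    return default()

# Example: absolute value expressed as a conditional term.
x = -5
result = cond(lambda: -x, lambda: x < 0,  # t1, phi1
              lambda: x)                  # default term
```

Since each guard and term is evaluated at most once, the complexity of a conditional term is bounded by the sum of the complexities of its parts, which is consistent with Theorem 1 below.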

**Theorem 1.**

- (1) Any L-program has polynomial computational complexity.
- (2) For any p-computable function, there is a suitable L-program that implements it.

- (1) $|w_1^*| \le |w_1| + C \cdot \sum_{i=2}^{n} |w_i|^{p}$;
- (2) $|w_i^*| \le |w_i|$, for all $i \in [2, \cdots, n]$.

## 3. Neural Networks

**Remark 1.**

**Remark 2.**

**Remark 3.**

**Remark 4.**

**Lemma 2.**

**Proof.**

**Lemma 3.**

**Proof.**

**Lemma 4.**

**Proof.**

**Lemma 5.**

**Proof.**

**Lemma 6.**

**Proof.**

**Theorem 2.**
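The theme of this section — that a feedforward neural network admits a representation by finite lists under which evaluation is polynomial-time — can be illustrated with a toy encoding: a network is a list of layers, a layer is a list of neurons, and a neuron is a pair [weights, bias]. The encoding and the choice of a piecewise-linear (ReLU) activation are illustrative assumptions, not the paper's exact construction:

```python
def relu(x):
    """A piecewise-linear activation, chosen so evaluation stays p-computable."""
    return x if x > 0 else 0

def eval_network(network, inputs):
    """Evaluate a feedforward network encoded as a nested list:
    network = [layer, ...], layer = [[weights, bias], ...].
    """
    values = inputs
    for layer in network:
        values = [relu(sum(w * v for w, v in zip(weights, values)) + bias)
                  for weights, bias in layer]
    return values

# A 2-2-1 network: two hidden neurons feeding one output neuron.
net = [
    [[[1.0, 1.0], 0.0], [[1.0, -1.0], 0.0]],  # hidden layer
    [[[1.0, 1.0], 0.0]],                      # output layer
]
out = eval_network(net, [2.0, 1.0])
```

The evaluation performs one multiplication per weight and one activation per neuron, so its running time is linear in the size of the list encoding — well within the polynomial bound that the representation is meant to guarantee.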

## 4. Materials and Methods

## 5. Results

## 6. Discussion

## 7. Conclusions

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Conflicts of Interest

## References



© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

Goncharov, S.; Nechesov, A. Polynomial-Computable Representation of Neural Networks in Semantic Programming. *J* **2023**, *6*, 48–57. https://doi.org/10.3390/j6010004