# An Efficient Method for Pricing Analysis Based on Neural Networks


## Abstract


## 1. Introduction

## 2. Deep Autoencoder

An autoencoder (AE) learns a function $h_{\mathbf{W},b}(x) \approx x$; that is, it is trained to make the output as similar to the input as possible. The hidden layer is called the encoder, and its output is the encoded representation of the input data. The output layer is called the decoder, which reconstructs the original data from the encoding; in practice, the decoder output itself is usually not of interest. The working principle of an AE can be expressed as follows:

$$h = f(\mathbf{W}x + b), \qquad \tilde{x} = g(\mathbf{W}'h + b'),$$

where $f$ and $g$ are the activation functions of the encoder and decoder, respectively.
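The principle above can be illustrated with a minimal sketch: a single tanh hidden layer trained by gradient descent to reconstruct its input. The toy data, layer sizes, and learning rate below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples with 8 features lying near a 3-dimensional subspace,
# so a 3-unit encoder can capture most of the structure.
Z = rng.normal(size=(200, 3))
X = Z @ rng.normal(size=(3, 8))

n_in, n_hid = 8, 3
W1 = rng.normal(scale=0.1, size=(n_in, n_hid))   # encoder weights
b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.1, size=(n_hid, n_in))   # decoder weights
b2 = np.zeros(n_in)

def forward(X):
    h = np.tanh(X @ W1 + b1)   # encoder: h = f(W x + b)
    x_rec = h @ W2 + b2        # decoder: x~ = g(W' h + b') with linear g
    return h, x_rec

lr = 0.01
errors = []
for epoch in range(500):
    h, x_rec = forward(X)
    err = x_rec - X                       # reconstruction residual
    errors.append(np.mean(err ** 2))      # MSE between input and output
    # Backpropagate the reconstruction loss through decoder and encoder.
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)      # tanh derivative
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# The encoder output is the low-dimensional code; the decoder output is discarded.
code, _ = forward(X)
```

After training, `errors` decreases and `code` holds the 3-dimensional encoding of each sample, matching the description above: the encoding, not the decoder output, is what the method keeps.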

## 3. Proposed Model

#### 3.1. Dimensionality Reduction of Stock Data Based on DAE

1. First construct multiple RBMs, as shown in Figure 3a, and pre-train them separately using a layer-by-layer greedy training strategy. Here, **X** is the original data; each RBM takes the output of the previous RBM as its input, and **W** denotes the pre-trained weights.
2. Stack the pre-trained RBMs layer by layer to build a symmetric model, as shown in Figure 3b. Layers 1 to m of the model form the encoder, and each encoder layer uses the corresponding **W** as its weights; layers m + 1 to 2m form the decoder, and each decoder layer uses the corresponding **W**^{T} as its weights.
3. Use the BP algorithm to fine-tune the model, updating the weights to **W** + ε, so that the final output $\tilde{\mathit{X}}$ of the model is as similar as possible to the input **X**; the output of the m-th layer is the encoding result.

The following describes the RBM-based DAE training process in detail.
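Step (1), the layer-by-layer greedy pre-training, can be sketched with CD-1 contrastive divergence, the standard RBM update. The binary toy data, epoch count, and learning rate are assumptions for illustration; the 48-24-12-5 encoder sizes follow the structure reported in the experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(X, n_hidden, epochs=200, lr=0.05):
    """Train one RBM with CD-1 contrastive divergence; return its parameters."""
    n_vis = X.shape[1]
    W = rng.normal(scale=0.01, size=(n_vis, n_hidden))
    b_vis = np.zeros(n_vis)
    b_hid = np.zeros(n_hidden)
    for _ in range(epochs):
        # Positive phase: hidden probabilities given the data.
        ph = sigmoid(X @ W + b_hid)
        h = (rng.random(ph.shape) < ph).astype(float)
        # Negative phase: one Gibbs step to reconstruct the visible layer.
        pv = sigmoid(h @ W.T + b_vis)
        ph2 = sigmoid(pv @ W + b_hid)
        # CD-1 gradient estimate: data statistics minus reconstruction statistics.
        W += lr * (X.T @ ph - pv.T @ ph2) / len(X)
        b_vis += lr * (X - pv).mean(axis=0)
        b_hid += lr * (ph - ph2).mean(axis=0)
    return W, b_vis, b_hid

# Greedy layer-by-layer pre-training: each RBM consumes the hidden
# activations of the previous one, as in step (1) above.
X = (rng.random((100, 48)) < 0.5).astype(float)   # toy binary input data
weights = []
inp = X
for n_hid in (24, 12, 5):
    W, b_vis, b_hid = train_rbm(inp, n_hid)
    weights.append((W, b_hid))
    inp = sigmoid(inp @ W + b_hid)   # feed activations to the next RBM
```

The collected `(W, b_hid)` pairs are what steps (2) and (3) reuse: stacked forward as the encoder, transposed as the decoder, then fine-tuned jointly with BP.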

The stock data **X** is input to the DAE, and the error back-propagation algorithm is used to fine-tune the network so as to minimize the error between the decoded output $\tilde{X}$ and the input **X**. After fine-tuning, the weights of the DAE network reach their optimal values, and the output of the m-th layer encoder in the middle of the network is the encoded sequence **X**′ of the stock data after dimensionality reduction.

#### 3.2. Stock Prediction Based on BP Neural Network

#### 3.3. Algorithm Description

**Algorithm 1.** Proposed algorithm for stock price prediction.

```
 1: for num in layers:
 2:     Initialize RBM(num) weights W and bias b
 3: Input stock data X
 4: for i in epoch1:
 5:     Use the method in Section 3.1 to train RBM(num)
 6:     W_list.append(W), b_list.append(b)
 7: return W_list, b_list
 8: Use the trained RBMs to construct a DAE with a symmetric structure
 9: Load W and b into the corresponding layers of the DAE network
10: for i in epoch2:
11:     Input stock data X to the DAE network to fine-tune the network parameters
12: end
13: Input stock data X to the trained DAE network
14: Obtain the encoding result X′ of the intermediate-layer encoder
15: Divide X′ into training and test sets
16: Build the BP neural network
17: for i in epoch3:
18:     Input the training set to train the BPNN
19: end
20: Use the BP neural network to predict the stock price Y′ on the test set
```
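Steps 15-20 of the algorithm (splitting the encoded features and training a BP network to predict prices) can be sketched as follows. The encoded features `Xp` stand in for the encoder output X′; the synthetic target, layer sizes, and learning rate are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-in for the encoder output X' (steps 13-14):
# 5 encoded features per day, with the next-day price as the target.
Xp = rng.normal(size=(300, 5))
y = Xp @ np.array([0.4, -0.2, 0.1, 0.3, -0.5]) + 0.05 * rng.normal(size=300)

# Step 15: divide X' into training and test sets.
Xtr, Xte, ytr, yte = Xp[:240], Xp[240:], y[:240], y[240:]

# Steps 16-19: a one-hidden-layer BP network trained by gradient descent.
W1 = rng.normal(scale=0.1, size=(5, 10)); b1 = np.zeros(10)
W2 = rng.normal(scale=0.1, size=(10, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(2000):
    h = np.tanh(Xtr @ W1 + b1)
    pred = (h @ W2 + b2).ravel()
    err = (pred - ytr)[:, None]          # prediction residual
    # Backpropagate the squared error through both layers.
    gW2 = h.T @ err / len(Xtr); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)     # tanh derivative
    gW1 = Xtr.T @ dh / len(Xtr); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Step 20: predict the price Y' on the test set.
y_pred = (np.tanh(Xte @ W1 + b1) @ W2 + b2).ravel()
mse = np.mean((y_pred - yte) ** 2)
```

On this synthetic target, the trained network's test MSE falls well below the variance of the targets, i.e., it beats a predict-the-mean baseline.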

## 4. Experimental Results

#### 4.1. Data Set

#### 4.2. Comparison of DAE Dimensionality Reduction Effects of Different Depths


#### 4.3. Comparison with Other Dimensionality Reduction Methods

The variances in Figure 6c suggest that forecasting after dimensionality reduction with FA and DAE is more effective.

#### 4.4. Comparison with Different Forecasting Methods

## 5. Conclusions

## Author Contributions

## Funding

## Conflicts of Interest

## References


| DAE Network Structure | Number of Network Layers | Training Error (MSE) | Test Error (MSE) |
|---|---|---|---|
| 48-5 | 2 | 0.00051 ± 0.00021 | 0.0054 ± 0.0009 |
| 48-24-5 | 3 | 0.00041 ± 0.00016 | 0.0030 ± 0.0004 |
| 48-24-12-5 | 4 | 0.00036 ± 0.00015 | 0.0025 ± 0.0007 |
| 48-30-20-10-5 | 5 | 0.00047 ± 0.00012 | 0.0034 ± 0.0005 |
| 48-40-30-20-10-5 | 6 | 0.00670 ± 0.00170 | 0.0220 ± 0.0040 |

| Network Structure | Reconstruction Error |
|---|---|
| 48-5 | 20.60 |
| 48-24-5 | 17.10 |
| 48-24-12-5 | 15.60 |
| 48-30-20-10-5 | 16.70 |
| 48-40-30-20-10-5 | 19.79 |

| Algorithm | MAE | MSE | MRE |
|---|---|---|---|
| FA_BP | 0.062 | 0.0045 | 9.72% |
| PCA_BP | 0.127 | 0.020 | 17.9% |
| Proposed DAE_BP | 0.043 | 0.0025 | 6.94% |
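The three error measures used throughout these comparisons can be computed as below. MRE is taken here to be the mean absolute relative error expressed as a percentage, which matches how the tables report it; the sample values are illustrative only.

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error."""
    return np.mean(np.abs(y_pred - y_true))

def mse(y_true, y_pred):
    """Mean squared error."""
    return np.mean((y_pred - y_true) ** 2)

def mre(y_true, y_pred):
    """Mean relative error, as a percentage of the true value."""
    return np.mean(np.abs(y_pred - y_true) / np.abs(y_true)) * 100

# Illustrative prices and predictions (not data from the paper).
y_true = np.array([10.0, 12.0, 11.0, 13.0])
y_pred = np.array([10.5, 11.5, 11.2, 12.6])
```

For these sample arrays, MAE is 0.4, MSE is 0.175, and MRE is about 3.5%.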

| Algorithm | Variance (D1) | Variance (D2) | Variance (D3) | Variance (D4) | Variance (D5) |
|---|---|---|---|---|---|
| FA | 0.049 | 0.027 | 0.039 | 0.030 | 0.015 |
| PCA | 0.039 | 0.015 | 0.038 | 0.009 | 0.018 |
| DAE | 0.053 | 0.073 | 0.086 | 0.051 | 0.056 |

| Algorithm | Running Time | MAE | MSE | MRE |
|---|---|---|---|---|
| SVR | 9.16 ms | 0.056 | 0.0039 | 7.71% |
| MLR | 0.13 ms | 0.160 | 0.029 | 22.9% |
| MLP | 0.44 ms | 0.063 | 0.0047 | 9.98% |
| Proposed | 0.29 ms | 0.043 | 0.0025 | 6.94% |

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Arabyat, Y.A.; AlZubi, A.A.; Aldebei, D.M.; Al-oqaily, S.Z.
An Efficient Method for Pricing Analysis Based on Neural Networks. *Risks* **2022**, *10*, 151.
https://doi.org/10.3390/risks10080151
