Article

Case Study of Deep Learning Model of Temperature-Induced Deflection of a Cable-Stayed Bridge Driven by Data Knowledge

1 Key Laboratory of Concrete and Prestressed Concrete Structures of the Ministry of Education, Southeast University, Nanjing 210096, China
2 Shenzhen Express Engineering Consulting Co., Ltd., Shenzhen 518000, China
* Author to whom correspondence should be addressed.
Symmetry 2021, 13(12), 2293; https://doi.org/10.3390/sym13122293
Submission received: 11 October 2021 / Revised: 17 November 2021 / Accepted: 23 November 2021 / Published: 2 December 2021

Abstract: A cable-stayed bridge is a typical symmetrical structure, and symmetry affects its deformation characteristics. The main girder of a cable-stayed bridge deflects markedly under temperature effects. A regression model of temperature-induced deflection is intended to provide a comparison value for bridge evaluation. Based on the temperature and deflection data obtained by the structural health monitoring system of a bridge, it is therefore meaningful to establish a correlation model between temperature and temperature-induced deflection. A high-quality model is difficult to obtain from the girder temperature alone, so in this paper temperature features derived from prior knowledge of the mechanical mechanism are used as the input information. At the same time, to strengthen the nonlinear capability of the model, an independently recurrent neural network (IndRNN) is selected for modeling, and the deep learning network is compared with machine learning networks to demonstrate the advantage of deep learning. When only the average temperature of the main girder is input, the accuracy is low regardless of whether a deep learning or a machine learning network is used. When the temperature information extracted with prior knowledge is input, the average error of the IndRNN model is only 2.53%, lower than those of the BPNN model and the traditional RNN. Combining knowledge with deep learning is therefore the best modeling scheme, and the resulting deep learning model can provide a comparison value of bridge deformation for bridge management.

1. Introduction

With the development of the economy, the transportation network has gradually expanded, so increasingly more bridges are being built across rivers, lakes, and seas [1]. Owing to the progress of construction and industrial technology, cable-stayed bridges have become more popular among long-span bridges [2]. To safeguard the operational lifetime of such projects, structural health monitoring (SHM) systems are installed on cable-stayed bridges to observe their service state in real time [3]. When a bridge deteriorates or fails, the SHM system is expected to sense the event in time to prevent a major accident. To achieve this goal, the big data accumulated by the SHM system must be mined as thoroughly as possible. Therefore, the management and maintenance of cable-stayed bridges based on the big data provided by SHM has become a hot topic [4].
Cable-stayed bridges are typical symmetrical structures, and symmetry affects their deformation characteristics. The main girder deflection of a cable-stayed bridge is an important embodiment of its service performance [5]. Under the effect of temperature, the deflection of the main girder changes slowly; this slowly varying component is the temperature-induced deflection [6]. The temperature-induced deflection determines the quasi-static state of the bridge. Therefore, if a regression model can be established that expresses the correlation between the bridge temperature and the temperature-induced deflection, the temperature-induced deflection can be computed from the measured temperature information and used as the baseline of the normal state of the bridge. Once the measured temperature-induced deflection is obtained, it can be compared with the value output by the regression model; if the difference between the two is too large, an abnormality in the working state of the bridge is indicated. The establishment of a high-precision regression model of the structural response is therefore a focus of bridge engineering [7]. Based on the big data obtained by SHM, linear regression models between girder temperature and temperature-induced deflection have been established, but their accuracy is poor [8]. After replacing them with powerful machine learning tools, the accuracy of the regression model between the main girder temperature and the temperature-induced deflection has been greatly improved, but the error is still unsatisfactory [9].
In fact, the temperature-induced deflection of the main girder of a cable-stayed bridge is affected by every component of the bridge. According to knowledge from mechanism research, the distributed temperature field that affects the temperature-induced deflection of a cable-stayed bridge can be summarized by data features such as the average temperature of the main girder, the vertical temperature difference of the main girder, and the tower temperature. Mechanical knowledge explains the causes of temperature-induced deflection, and scholars have established temperature-induced deflection models according to the mechanism [10]. However, the modeling process based on the mechanism is cumbersome and therefore less practical.
Obviously, the existing regression models have two problems: first, the temperature information used is neither clear nor sufficient; second, the time series modeling performance of the fitting tools still needs to be improved. In this paper, prior knowledge is used to extract the temperature features and reduce the data dimension, and deep learning is used as a fitting tool to strengthen the performance in time series regression and to enhance the robustness of the established model [11]. This paper verifies the advantages of deep learning technology and explores the optimal data cost required to establish the model. The deep learning model is expected to achieve better accuracy and thus contribute to more reliable and more efficient recognition of the bridge state.

2. Data Set for Model

2.1. Temperature Information Based on Prior Knowledge

This paper uses data from the Tongling Yangtze River Bridge in Anhui Province, China. The Tongling Yangtze River Bridge is a highway–railway cable-stayed bridge connecting Tongling and Wuhu. Figure 1 shows the elevation of the bridge. The main span reaches 630 m and the total length is 1290 m; it is a typical double-tower cable-stayed bridge. To monitor the temperature field of this bridge, several temperature sensors are installed on the main girder and the cable tower. To monitor the deflection, a displacement sensor is installed in the middle of the main span. Sections 1-1 and 2-2 show the detailed locations of these sensors.
As shown in Figure 2, Section 1-1 is the midspan section of the main girder. As shown in Figure 3, Section 2-2 is the cross-section of the tower. In this bridge, twelve temperature sensors are installed in the main girder, and eight temperature sensors are installed surrounding the tower. The deflection sensor is installed at the bottom center of the main girder.
If the data of all temperature sensors were used for modeling, the large data scale would sharply increase the modeling cost, and invalid information would reduce the fitting accuracy [12,13]. Therefore, this paper extracts temperature features based on prior knowledge. Taking research on the mechanism of temperature-induced deflection as prior knowledge and combining it with the monitoring information of the bridge, the temperature field of the whole bridge can be summarized as the main girder temperature, the main girder temperature difference, and the tower temperature [14]. Therefore, we take the average value of sensors W1~W12 as the average temperature of the girder, the difference between W10 and W3 as the vertical temperature difference of the main girder, and the average value of T1~T8 as the tower temperature. The sampling interval of the temperature data in this paper is 10 min [15,16]. The average temperature of the main girder is denoted as W, the vertical temperature difference of the main girder as WD, and the cable tower temperature as T. W, WD and T over one day are shown in Figure 4.
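As an illustration only, the following is a minimal sketch of this feature extraction, assuming the raw readings are stored in a pandas DataFrame with one column per sensor; the file name and the column names W1–W12 and T1–T8 are hypothetical and not taken from the paper.

```python
import pandas as pd

# Hypothetical raw data: one row per sample, one column per temperature sensor.
raw = pd.read_csv("temperature_sensors.csv", index_col="timestamp", parse_dates=True)

girder_cols = [f"W{i}" for i in range(1, 13)]   # W1..W12 on the main girder
tower_cols = [f"T{i}" for i in range(1, 9)]     # T1..T8 on the tower

features = pd.DataFrame(index=raw.index)
features["W"] = raw[girder_cols].mean(axis=1)   # average girder temperature
features["WD"] = raw["W10"] - raw["W3"]         # vertical temperature difference of the girder
features["T"] = raw[tower_cols].mean(axis=1)    # average tower temperature

# Resample to the 10 min interval used in the paper (a no-op if already sampled at 10 min).
features = features.resample("10min").mean()
```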
As shown in Figure 4, the temperature features resemble sine waves because they follow sunrise and sunset, and the trend of the temperature-induced deflection is similar to that of the temperature features. Next, we extract the temperature-induced deflection.

2.2. Temperature-Induced Deflection and Data Set

The time history of deflection sensor D over a single day is shown in Figure 5. The raw data contain a large amount of spike-like high-frequency content caused by passing vehicles. We used 10 min averaging to extract the temperature-induced deflection and to align its time stamps with the temperature data [15]. As shown in Figure 5, the temperature-induced deflection rises and falls in the same way as the temperature information. The temperature-induced deflection is denoted as D.
We used the SHM system to obtain nine months of time series data from the bridge. After clearing the time stamps at which any of the four variables was missing, 31,103 data points remain for each variable. As shown in Figure 6, the first 75% of the data is defined as the training set for training the neural networks, and the last 25% is defined as the test set for testing them.
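Continuing the previous sketch (the deflection file name is again hypothetical, and `features` refers to the W/WD/T DataFrame built above), the 10 min averaging, the merge with the temperature features, and the 75%/25% split could look as follows:

```python
import pandas as pd

# Hypothetical raw deflection record of sensor D, sampled at high frequency
# so that vehicle-induced vibration is still present.
deflection_raw = pd.read_csv("deflection_D.csv", index_col="timestamp", parse_dates=True)

# 10 min averaging removes the high-frequency vehicle effects and aligns D
# with the 10 min temperature features W, WD and T.
D = deflection_raw["D"].resample("10min").mean().rename("D")

# `features` is the DataFrame with columns W, WD, T from the previous sketch.
data = features.join(D, how="inner").dropna()    # drop time stamps with any missing value

split = int(len(data) * 0.75)                    # first 75% for training, last 25% for testing
train_set, test_set = data.iloc[:split], data.iloc[split:]
```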

3. Fitting Methods

3.1. Back Propagation Neural Network (BPNN)

BPNN is a type of artificial neural network (ANN) belonging to machine learning. Compared with a general ANN, a BPNN performs back propagation [17]. As shown in Figure 7, in a BPNN the data x1, …, xt are fed from the input layer into the hidden layer after being weighted by the coefficients Wtm. In the hidden layer, the information is processed with the sigmoid activation function σ and then weighted by the coefficients Wm; finally, the regression value y’ is obtained through the fully connected layer. The above is the forward propagation in a BPNN. Back propagation is then used to optimize the parameters of the neural network.
Back propagation is a gradient descent algorithm that uses the loss between the actual value and the regression value as the optimization index; iterative training over several epochs improves the accuracy of the neural network [18]. BPNN is a traditional machine learning fitting tool, and many introductions to it exist in the literature, so this paper does not repeat them.
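A minimal PyTorch sketch of the BPNN described above is given below; this is an assumed illustration rather than the authors' original code, and the hidden size of 64 simply follows the settings reported in Section 4.

```python
import torch
import torch.nn as nn

class BPNN(nn.Module):
    """Single-hidden-layer feedforward network with sigmoid activation (cf. Figure 7)."""
    def __init__(self, n_inputs: int, n_hidden: int = 64):
        super().__init__()
        self.hidden = nn.Linear(n_inputs, n_hidden)   # weights W_tm, input layer -> hidden layer
        self.output = nn.Linear(n_hidden, 1)          # weights W_m, hidden layer -> regression value y'

    def forward(self, x):
        return self.output(torch.sigmoid(self.hidden(x)))

# Back propagation: the loss between regression value and actual value drives gradient descent.
model = BPNN(n_inputs=30)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
```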

3.2. Recurrent Neural Network (RNN)

RNN is a type of neural network belonging to deep learning. Compared with a machine learning network, a deep learning network has deeper hidden layers and more complex computing cells, so deep learning usually performs better than machine learning [19]. Scholars created the RNN to improve the temporal performance of neural networks: an RNN expands in horizontal (temporal) depth and therefore has a time transmission characteristic.
As shown in Figure 8, unlike a BPNN, the time series data are input into the recurrent hidden layer of the RNN at different times. The RNN cell at the current time transmits data to the output layer while also transmitting data to the next time step, where it is combined with the new input.
We illustrate the RNN cell at time t. As shown in Figure 9, at time t the cell accepts xt at this time and the hidden output value ht−1 from the previous time. These two values are combined with the weights U and W, respectively. The combined value is activated through the tanh function to obtain the hidden value ht at this time. ht is passed in the hidden cell to the next time step and is also combined with the weight V to obtain the output value yt at the current time. This lateral depth of the RNN is the main reason why it performs better than a BPNN in time series modeling.
Taking time t as an example, the data operations in the RNN cell are as follows. xt and ht−1 are first combined with the weight U, the weight W, and the bias b, and the result is passed through the tanh activation function to obtain the value ht, which is carried to the next time step. ht is multiplied by the weight V, and the product is passed through the σ activation function to obtain the output value yt. The calculation formulas are given in Equations (1) and (2).
$h_t = \tanh(Ux_t + Wh_{t-1} + b)$ (1)
$y_t = \sigma(Vh_t)$ (2)
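As an illustrative sketch of Equations (1) and (2) (an assumed implementation, not the authors' code), a single RNN cell could be written as:

```python
import torch
import torch.nn as nn

class SimpleRNNCell(nn.Module):
    """One recurrent cell implementing Equations (1) and (2)."""
    def __init__(self, n_inputs: int, n_hidden: int):
        super().__init__()
        self.U = nn.Linear(n_inputs, n_hidden, bias=True)    # input weights U and bias b
        self.W = nn.Linear(n_hidden, n_hidden, bias=False)   # recurrent weights W
        self.V = nn.Linear(n_hidden, 1, bias=False)          # output weights V

    def forward(self, x_t, h_prev):
        h_t = torch.tanh(self.U(x_t) + self.W(h_prev))       # Eq. (1)
        y_t = torch.sigmoid(self.V(h_t))                     # Eq. (2)
        return y_t, h_t
```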

3.3. Independently Recurrent Neural Network (IndRNN)

Although an RNN constructs temporal links between its hidden cells, it struggles to express long-term relationships and it is difficult to build vertical depth, because an RNN is prone to gradient explosion or gradient vanishing. If a deeper RNN cannot be constructed, only simple problems can be modeled, which greatly limits the advantages of the RNN [20]. If the gradient explosion or vanishing problem could be solved, the RNN could absorb massive information and achieve strong fitting performance.
To solve this problem of the traditional RNN, scholars proposed the independently recurrent neural network (IndRNN) [21]. The cause of gradient explosion or vanishing in the RNN is the hyperbolic tangent function tanh and the sigmoid function σ in the RNN hidden cell. In the IndRNN cell, the ReLU function is selected as the activation function, so the cell has strong robustness and is feasible for multiple hidden layers [21]. As shown in Figure 10, the ReLU function is used as a substitute for tanh and σ.
In the IndRNN cell, ht is described by Equation (3):
$h_t = \mathrm{ReLU}(Ux_t + W \odot h_{t-1} + b)$ (3)
where ⊙ represents the Hadamard product and b is the bias vector in the hidden cell. The activation function ReLU can be described by Equation (4):
$\mathrm{ReLU}(x) = \begin{cases} x, & x \ge 0 \\ 0, & x < 0 \end{cases}$ (4)
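As a minimal sketch of Equations (3) and (4) (again an assumed implementation), the key change from the RNN cell above is that the recurrent weight is a vector applied element-wise (Hadamard product) and the activation is ReLU:

```python
import torch
import torch.nn as nn

class IndRNNCell(nn.Module):
    """One IndRNN cell: element-wise recurrent weight and ReLU activation (Eqs. (3) and (4))."""
    def __init__(self, n_inputs: int, n_hidden: int):
        super().__init__()
        self.U = nn.Linear(n_inputs, n_hidden, bias=True)   # input weights U and bias b
        self.w = nn.Parameter(torch.ones(n_hidden))         # recurrent weight vector (Hadamard product)

    def forward(self, x_t, h_prev):
        return torch.relu(self.U(x_t) + self.w * h_prev)    # Eq. (3): ReLU(U x_t + w ⊙ h_{t-1} + b)
```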
As shown in Figure 11, IndRNN can be stacked with multiple layers. Compared with the traditional RNN, IndRNN assigns batch normalization (BN) before each input layer and after each output layer to avoid the covariate shift between the hidden layers [22].
In SHM research, the application of deep learning is progressing rapidly [23,24], so it is necessary to experiment with different kinds of neural networks [25]. We use the above three fitting tools to model the relationship between the temperature variables and the temperature-induced deflection.

4. Temperature-Induced Deflection Model

4.1. Neural Network Model

In this paper, the operation flow of the BPNN/RNN/IndRNN models is the same; the differences lie in the neurons used in the hidden layer. As shown in Figure 12a, the first phase is the training phase. In the training phase, the first step is to normalize the data of the training set; conventional min–max normalization is used, which is standard practice in deep learning and is not repeated here. The data are then fed into the hidden layer and the fully connected layer, and the regression value y’ is obtained.
Then, the error between the regression value y’ and the real value y is calculated as the loss for back propagation; the loss is given in Equation (5). The number of training epochs is preset, and the training phase ends when the predetermined epoch is reached, yielding a trained model.
$loss = \frac{1}{N}\sum_{n=1}^{N}\left(y_n - y'_n\right)^2$ (5)
As shown in Figure 12b, the trained model is then tested to determine whether it is qualified for application. We then compare the performance of the three kinds of networks.
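The training phase of Figure 12 can be sketched as follows. This is an assumed, minimal PyTorch loop using min–max normalization and the MSE loss of Equation (5); the hyperparameter values follow Section 4.2, while the Adam optimizer is an assumption, since the paper does not name the optimizer.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def min_max_normalize(x: torch.Tensor) -> torch.Tensor:
    """Conventional min-max normalization applied to the training data."""
    return (x - x.min()) / (x.max() - x.min())

def train(model: nn.Module, x_train, y_train, epochs=100, lr=1e-4, batch_size=10):
    """Training phase of Figure 12a: forward pass, MSE loss (Eq. (5)), back propagation."""
    loader = DataLoader(TensorDataset(x_train, y_train), batch_size=batch_size, shuffle=False)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()                       # Eq. (5): mean squared error
    for _ in range(epochs):
        for xb, yb in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(xb), yb)
            loss.backward()                      # back propagation
            optimizer.step()
    return model
```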

4.2. Model Based on Non-Mechanism Temperature Feature

If the mechanism of the temperature-induced deflection of cable-stayed bridges is set aside, one would assume that the temperature-induced deflection of the main girder is driven only by the temperature of the main girder. Building a model based on this non-mechanism temperature feature therefore only requires the temperature feature W as input. Referring to research on the time series properties of bridge temperature [15], we selected 30 input points corresponding to one output point; the time-shift mode is shown in Figure 13.
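A minimal sketch of this windowing (30 temperature samples per deflection output point), assuming the series are plain NumPy arrays:

```python
import numpy as np

def make_windows(temperature: np.ndarray, deflection: np.ndarray, window: int = 30):
    """Pair each output point D with the preceding 30 samples of W (Figure 13)."""
    X, y = [], []
    for i in range(window, len(temperature)):
        X.append(temperature[i - window:i])
        y.append(deflection[i])
    return np.array(X), np.array(y)

# X has shape (n_samples, 30) and y has shape (n_samples,)
```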
We trained and tested the three kinds of neural networks according to the process in Figure 12, with 100 preset epochs. The learning rate (lr) was set to 0.0001, the batch size to 10, and the hidden layer had 64 cells. The loss curves during training are shown in Figure 14. The CPU of our computer is an Intel Core i7-7700K with a clock frequency of 4.20 GHz. The training time of the BPNN was 158.58 s, that of the RNN was 378.46 s, and that of the IndRNN was 405.64 s.
As shown in Figure 14a,b, in both the training phase and the test phase the curves of all three neural networks converge, but that of the IndRNN clearly has the smallest error. We fed the test set into the trained models; the regression values and actual values are shown in Figure 15.
As shown in Figure 15, no matter which neural network is used, the error is not satisfactory when only W is input. The average error of the BPNN was 11.12%, that of the RNN 10.57%, and that of the IndRNN 9.69%. Obviously, whether the BPNN belonging to machine learning or the RNN and IndRNN belonging to deep learning is used, a satisfactory model cannot be obtained by inputting W alone. Therefore, we needed to further explore knowledge-driven input information.
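The "average error" reported here is read as a mean relative error between the regression value and the measured value; the following is a sketch of that assumed metric, since the paper does not give its exact definition.

```python
import numpy as np

def average_error(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean relative error in percent (assumed definition of the reported average error)."""
    return float(np.mean(np.abs((y_pred - y_true) / y_true)) * 100.0)
```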

4.3. Model Based on Temperature Feature Driven by Knowledge

Obviously, a high-performance temperature-induced deflection model cannot be produced without prior knowledge, even with advanced fitting tools. Therefore, we input the three temperature features W, WD, and T obtained from prior knowledge. When three temperature variables are input, the data arrangement differs according to the principle of each neural network. Take time t as an example. As shown in Figure 16, for the BPNN, the data of the three temperature features at the different times are input into the model together. For the RNN/IndRNN, because of their temporal modeling attribute, the three temperature features at the same time are stacked into one vector for input at each time step.
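A minimal sketch of the two input arrangements in Figure 16, assuming W, WD and T are NumPy arrays aligned in time (the shapes shown are assumptions for illustration):

```python
import numpy as np

def bpnn_input(W, WD, T, window=30):
    """BPNN (Figure 16a): histories of the three features are flattened into one vector per sample."""
    X = [np.concatenate([W[i - window:i], WD[i - window:i], T[i - window:i]])
         for i in range(window, len(W))]
    return np.array(X)                      # shape: (n_samples, 3 * window)

def rnn_input(W, WD, T, window=30):
    """RNN/IndRNN (Figure 16b): the three features at each time step form one input vector."""
    seq = np.stack([W, WD, T], axis=1)      # shape: (time, 3)
    X = [seq[i - window:i] for i in range(window, len(seq))]
    return np.array(X)                      # shape: (n_samples, window, 3)
```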
In the neural networks with prior knowledge, the learning rate (lr) was still 0.0001, the batch size was 10, each hidden layer had 64 cells, and the number of hidden layers was increased to two. After inputting the three temperature features, the loss curves during training are shown in Figure 17. The training time of the BPNN was 422.04 s, that of the RNN was 1125.50 s, and that of the IndRNN was 1569.78 s.
As shown in Figure 17a,b, in both the training phase and the test phase the curves of all three neural networks converge, and that of the IndRNN again has the smallest error. When the three temperature variables were input, the loss curves converged to lower values. We fed the test set into the models trained with prior knowledge; the regression values and actual values are shown in Figure 18.
As shown in Figure 18, after inputting more comprehensive temperature information, the output accuracy of all three models improved. The average error of the BPNN was 6.57%, that of the RNN 4.76%, and that of the IndRNN only 2.53%. This result proves that when establishing the temperature-induced deflection model of the main girder of a cable-stayed bridge, sufficient temperature information must be input, and the temperature features extracted based on the mechanism are clearly appropriate. Compared with traditional machine learning, deep learning technology has stronger nonlinear fitting performance and thus yields a more accurate regression model. Compared with the RNN, the IndRNN has a better modeling effect, and its accuracy is the highest among the three models because of its stronger robustness. Therefore, this paper suggests that when establishing the temperature-induced deflection model of the main girder of a cable-stayed bridge, the temperature features should be extracted based on prior knowledge of the mechanical mechanism, and the fitting method should be chosen from tools with nonlinear robustness, such as IndRNN and other deep learning tools.

5. Conclusions

Under the influence of the complex temperature field distributed over all components of a cable-stayed bridge, the main girder produces temperature-induced deflection. Because the action mechanism is complex, establishing the correlation model between the temperature features and the temperature-induced deflection is a complex task. The difficulty is usually considered to lie in two aspects: information extraction and fitting tools. To establish a more accurate temperature-induced deflection model, this paper uses prior knowledge from the mechanical mechanism to extract appropriate temperature features and uses a deep learning tool with more powerful nonlinear expression performance. The conclusions are as follows:
(1)
When establishing the temperature-induced deflection model of a cable-stayed bridge, we should input not only the temperature data of the main girder but also the temperature information obtained from prior knowledge. Through the mechanical mechanism of the temperature-induced deflection of a cable-stayed bridge, this paper obtains three temperature features, namely, the average temperature of the main girder, the vertical temperature difference of the main girder, and the temperature of the tower. Only by inputting the temperature information obtained from prior knowledge can a good model be established.
(2)
The temperature-induced deflection model established with the BPNN, which belongs to traditional machine learning, performs worse than the models established with deep learning algorithms. When only the average temperature of the main girder is input, the average error of the BPNN model is 11.12%. After inputting the three temperature features, the average error is reduced to 6.57%.
(3)
Benefiting from its stronger nonlinear modeling performance, the model established by deep learning has higher accuracy. When only the average temperature of the main girder is input, the average error of the IndRNN model is 9.69%. After inputting the three temperature features, the output error is reduced to only 2.53%. Deep learning is clearly a better tool for bridge response modeling.

Author Contributions

Conceptualization, Z.Y. and Y.D.; funding acquisition, Y.D. and H.Z.; investigation, Z.W.; methodology, Z.Y. and Y.D.; resources, Y.D. and H.Z.; software, Z.Y., Y.D. and H.Z.; validation, Z.Y.; writing—original draft, Z.Y.; writing—review and editing, Y.D. All authors have read and agreed to the published version of the manuscript.

Funding

The Fund for Distinguished Young Scientists of Jiangsu Province (Grant BK20190013), the National Natural Science Foundation of China (Grants 51978154, 52008099, and 51608258), Natural Science Foundation of Jiangsu Province (Grant. BK20200369), and the Fund for Jiangsu Graduate Research and Practice Innovation Program (Grant KYCX21_0116).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the editors and the anonymous reviewers for their valuable comments on the content and the presentation of this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Song, S.Y.; Guo, J.; Su, Q.K. Technical challenges in the construction of bridge-tunnel sea-crossing projects in China. J. Zhejiang Univ. Sci. A Appl. Phys. Eng. 2020, 21, 509–513. [Google Scholar] [CrossRef]
  2. Zhang, L.; Qiu, G.; Chen, Z. Structural health monitoring methods of cables in cable-stayed bridge: A review. Measurement 2020, 168, 108343. [Google Scholar] [CrossRef]
  3. Miyamoto, A.; Motoshita, M. A Study on the intelligent bridge with an advanced monitoring system and smart control techniques. Smart Struct. Syst. 2017, 19, 587–599. [Google Scholar]
  4. Yan, Z.G.; Yue, Q.; Shi, Z. Design of Structural Health Monitoring System for Hutong Changjiang River Bridge. Bridge Constr. 2017, 47, 7–12. [Google Scholar]
  5. Xia, G.P. Parametric Study of Cable Deflection and Gravity Stiffness of Cable-Stayed Suspension Bridge. Appl. Mech. Mater. 2014, 488, 445–448. [Google Scholar] [CrossRef]
  6. Ding, Y.; Wang, G.; Zhou, G.; Li, A. Life-cycle simulation method of temperature field of steel box girder for Runyang cable-stayed bridge based on field monitoring data. China Civ. Eng. J. 2013, 46, 129–136. [Google Scholar]
  7. Xu, X.; Xu, C.; Zhang, Y.; Wang, H. Preliminary Study on the Loss Laws of Bearing Capacity of Tunnel Structure. Symmetry 2021, 13, 1951. [Google Scholar] [CrossRef]
  8. Li, H. Study on Mechanical Characteristics of Railway Cable-Stayed Bridges with High and Low Towers. China Railw. Sci. 2019, 40, 54–59. [Google Scholar]
  9. Yang, D.H.; Yi, T.H.; Li, H.N.; Zhang, Y.F. Correlation-Based Estimation Method for Cable-Stayed Bridge Girder Deflection Variability under Thermal Action. J. Perform. Constr. Facil. 2018, 32, 04018070. [Google Scholar] [CrossRef]
  10. Zhou, Y.; Sun, L.; Fu, Z.; Ren, P. General formulas for estimating temperature-induced mid-span vertical displacement of cable-stayed bridges. Eng. Struct. 2020, 221, 111012. [Google Scholar] [CrossRef]
  11. Pedraza, A.; Deniz, O.; Bueno, G. On the Relationship between Generalization and Robustness to Adversarial Examples. Symmetry 2021, 13, 817. [Google Scholar] [CrossRef]
  12. Porter, W.A.; Liu, W. On the performance of higher order moment neural computation. Inf. Sci. Appl. 1995, 3, 179–191. [Google Scholar] [CrossRef]
  13. Korczak, J.; Hammadi-Mesmoudi, F. A way to improve an architecture of neural network classifier for remote sensing applications. Neural Process. Lett. 1994, 1, 13–16. [Google Scholar] [CrossRef]
  14. Zhou, Y.; Sun, L. Effects of environmental and operational actions on the modal frequency variations of a sea-crossing bridge: A periodicity perspective. Mech. Syst. Signal Process. 2019, 131, 505–523. [Google Scholar] [CrossRef]
  15. Liu, H.; Ding, Y.-L.; Zhao, H.-W.; Wang, M.-Y.; Geng, F.-F. Deep learning-based recovery method for missing structural temperature data using LSTM network. Struct. Monit. Maint. 2020, 7, 109–124. [Google Scholar]
  16. Mei, X.D.; Lu, Y.Y.; Shi, J. Temperature monitoring and analysis of a long-span cable-stayed bridge during construction period. Struct. Monit. Maint. 2021, 8, 203–220. [Google Scholar]
  17. Sadeghi, B. A BP-neural network predictor model for plastic injection molding process. J. Mater. Process. Technol. 2000, 103, 411–416. [Google Scholar] [CrossRef]
  18. Stuart, G.; Spruston, N.; Sakmann, B.; Häusser, M. Action potential initiation and backpropagation in neurons of the mammalian CNS. Trends Neurosci. 2016, 134, 440–444. [Google Scholar] [CrossRef]
  19. Williams, R.J.; Zipser, D. A Learning Algorithm for Continually Running Fully Recurrent Neural Networks. Neural Comput. 1998, 1, 270–280. [Google Scholar] [CrossRef]
  20. Chen, Y.; Cheng, Q.; Cheng, Y.; Yang, H.; Yu, H. Applications of Recurrent Neural Networks in Environmental Factor Forecasting: A Review. Neural Comput. 2018, 30, 2855–2881. [Google Scholar] [CrossRef] [PubMed]
  21. Zhang, P.; Meng, J.; Luan, Y.; Liu, C. Plant miRNA-lncRNA Interaction Prediction with the Ensemble of CNN and IndRNN. Interdiscip. Sci. Comput. Life Sci. 2020, 12, 82–89. [Google Scholar] [CrossRef]
  22. Li, Y.; Wang, N.; Shi, J.; Liu, J.; Hou, X. Revisiting Batch Normalization for Practical Domain Adaptation. Pattern Recognit. 2016, 80, 109–117. [Google Scholar] [CrossRef]
  23. Yaghoubi, V.; Cheng, L.; Van, P.W.; Kersemans, M. An ensemble classifier for vibration-based quality monitoring. Mech. Syst. Signal Process. 2022, 165, 108341. [Google Scholar] [CrossRef]
  24. Yaghoubi, V.; Liangliang, C.; Wim, V.P.; Mathias, K. A novel multi-classifier information fusion based on Dempster–Shafer theory: Application to vibration-based fault detection. Struct. Health Monit. 2020, 14759217211007130. [Google Scholar] [CrossRef]
  25. Wang, Z.-Y.; Lu, C.; Zhou, B. Fault diagnosis for rotary machinery with selective ensemble neural networks. Mech. Syst. Signal Process. 2018, 113, 112–130. [Google Scholar] [CrossRef]
Figure 1. The elevation of the Tongling Yangtze River bridge.
Figure 2. Section of main girder (Section 1-1).
Figure 3. Section of the tower (Section 2-2).
Figure 4. W, WD, and T in one day. (a) Temperature of W. (b) Temperature of WD. (c) Temperature of T.
Figure 5. Temperature-induced deflection in one day.
Figure 6. W, WD, T and D in nine months.
Figure 7. The typical architecture of BPNN.
Figure 8. The typical architecture of RNN.
Figure 9. One RNN cell in the hidden layer.
Figure 10. One IndRNN cell in the hidden layer.
Figure 11. The typical architecture of IndRNN.
Figure 12. Training phase and test phase of machine learning: (a) Training phase of neural network; (b) Test phase of neural network.
Figure 13. The time-moving relationship between W and D.
Figure 14. The loss curves in training and test phases of the three kinds of neural network: (a) Training phase; (b) Test phase.
Figure 15. Regression values of three kinds of neural network: (a) BPNN; (b) RNN; (c) IndRNN.
Figure 16. The relationship between temperature features and temperature-induced deflection at time t: (a) The relationship between W, WD, T and D at time t of BPNN; (b) The relationship between W, WD, T and D at time t of RNN/IndRNN.
Figure 17. The loss curves in the training phase and test phase of the three kinds of neural networks with three temperature variables by prior knowledge: (a) Training phase; (b) Test phase.
Figure 18. Regression value of the three kinds of neural network with prior knowledge: (a) BPNN; (b) RNN; (c) IndRNN.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
