Article

Predicting Temperature of Permanent Magnet Synchronous Motor Based on Deep Neural Network

Hai Guo, Qun Ding, Yifan Song, Haoran Tang, Likun Wang and Jingying Zhao
1 Post-Doctoral Workstation of Electronic Engineering, Heilongjiang University, Harbin 150080, China
2 College of Computer Science and Technology, Dalian Minzu University, Dalian 116650, China
3 College of Electronic and Electrical Engineering, Harbin University of Science and Technology, Harbin 150080, China
4 Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116650, China
* Author to whom correspondence should be addressed.
Energies 2020, 13(18), 4782; https://doi.org/10.3390/en13184782
Submission received: 30 July 2020 / Revised: 9 September 2020 / Accepted: 10 September 2020 / Published: 14 September 2020
(This article belongs to the Special Issue Machine Learning for Energy Systems 2021)

Abstract

The heat loss and cooling modes of a permanent magnet synchronous motor (PMSM) directly affect its temperature rise. The accurate evaluation and prediction of stator winding temperature is of great significance to the safety and reliability of PMSMs. In order to study the influencing factors of stator winding temperature and prevent motor insulation ageing, insulation burning, permanent magnet demagnetization and other faults caused by high stator winding temperature, we propose a computer model for PMSM temperature prediction. Ambient temperature, coolant temperature, direct-axis voltage, quadrature-axis voltage, motor speed, torque, direct-axis current, quadrature-axis current, permanent magnet surface temperature, stator yoke temperature, and stator tooth temperature are taken as the input, while the stator winding temperature is taken as the output. A deep neural network (DNN) model for PMSM temperature prediction was constructed. The experimental results showed that the mean absolute error (MAE) of the model was 0.1515, the root mean squared error (RMSE) was 0.2368, and the goodness of fit (R2) was 0.9439, indicating close agreement between the predicted and measured data. Through comparative experiments, the prediction accuracy of the DNN model proposed in this paper was shown to be better than that of other models. This model can effectively predict the temperature change of the stator winding, provide technical support to temperature early warning systems and ensure the safe operation of PMSMs.


1. Introduction

The thermal loss and cooling modes of the permanent magnet synchronous motor (PMSM) directly affect its temperature rise [1,2,3]. The heat loss of the PMSM mainly comprises copper loss, iron loss and mechanical loss. The iron loss mainly depends on the stator voltage, and the mechanical loss mainly depends on the rotor speed. Unlike iron loss and mechanical loss, the copper loss of the stator directly determines how much the stator winding heats up. The heat of the stator winding is first transferred to the insulation, which, compared with the winding and core, has the worst heat resistance of all the materials in the motor. In engineering practice, the selection of the insulation grade of a PMSM depends entirely on the temperature of the stator winding. When the temperature of the stator winding is too high, the insulation ages thermally and may even peel off, which seriously threatens the safe operation of the motor. In addition, if the heating of the winding cannot be effectively controlled, the heat of the stator winding is further transmitted to the rotor side through the air gap, which can cause irreversible demagnetization of the permanent magnet. In conclusion, accurate evaluation and prediction of the stator winding temperature is of great significance to the safety and reliability of permanent magnet motors.
In order to ensure safe operation, many experts and scholars have put forward methods to measure the temperature of a PMSM. Wallscheid et al. used an accurate flux observer in the fundamental wave domain that can indirectly obtain magnet temperature without any additional sensors or signal injections [4]. Mohammed et al. proposed a new sensing method, namely, investigating the application of dedicated electrically non-conductive and electromagnetic interference immune fiber Bragg grating (FBG) temperature sensors embedded in PMSM windings to enable winding open-circuit fault diagnosis based on observing the fault thermal signature [5,6]. However, when these methods are used to measure the temperature of a PMSM, a lot of experimental preparation is needed, which leads to high costs and tedious processes. For this reason, many scholars have begun to consider how to predict temperature after collecting enough data.
The traditional motor temperature prediction approach mainly uses the finite element method [7,8]: the transient temperature is simulated with the finite element method, a temperature field is established, and the motor temperature is then predicted. This approach can only calculate and process the current, linear data; it cannot handle a large volume of nonlinear historical data. Methods based on machine learning, on the other hand, can effectively solve this problem, and the overall prediction accuracy can be greatly improved compared with the traditional methods. Many experts and scholars have carried out extensive research on predicting motor temperature with machine learning. Chen et al. used a support vector machine to predict the hot-spot temperature of an oil-immersed transformer, and verified its practicability and effectiveness on a large power transformer [9]. Jo et al. used a variety of machine learning methods, including decision trees, to predict the end-point temperature of a Linz–Donawitz converter, which was used to improve the quality of melted pig iron; pure oxygen was injected into the hot metal to remove impurities via oxidation–reduction reactions, and the simulation results were compared with the real temperature [10]. The rise of ensemble learning has greatly promoted the development of motor temperature prediction models. Wang applied the random forest method to the temperature prediction of a ladle furnace: the sample was divided into several subsets and the random forest method was applied, achieving higher accuracy than the other temperature models [11]. Zhukov et al. used ensemble classification methods to assess power system security. The proposed hybrid approach was based on random forest and boosting models, and the experimental results showed that it can be employed to examine whether a power system is secure under steady-state operating conditions [12]. Su et al. combined several extreme learning machines with different parameters to establish a prediction model of iron temperatures in a blast furnace, selecting the corresponding influencing factors as inputs. This model achieved high prediction accuracy and generalization performance for molten iron temperature prediction [13].
In recent years, deep learning methods have received more and more attention. Feature extraction in classic machine learning mainly depends on humans: for simple, specific tasks, manual feature extraction is simple and effective, but for complex tasks, choosing appropriate features can be extremely difficult. Deep learning does not rely on manual feature extraction; features are extracted automatically by the machine, which is often called end-to-end learning [14]. Therefore, deep learning methods have better versatility. Chinnathambi et al. established three kinds of deep neural networks (DNNs) to predict the day-ahead price of the Iberian electricity market [15]. Kasburg et al. used a long short-term memory (LSTM) neural network to predict the photovoltaic power generation of an active solar tracker, with promising results [16]. Tao et al. proposed a short-term forecasting model based on deep learning for PM2.5 concentrations [17]. Sengar et al. combined a DNN with a chicken swarm optimization algorithm to address the load forecasting problem of a wind power generation system [18]. Gui et al. proposed a multi-step temporal feature selection optimization model of temperature based on a DNN and a genetic algorithm (GA), which effectively predicted the temperature of a reheater system [19]. Because of their strong learning ability, wide coverage and good adaptability, deep learning methods have been widely used in various fields.
However, deep learning methods have barely been applied to PMSM temperature prediction. In this paper, the prediction of the stator winding temperature of PMSMs is studied. A PMSM consists of two key components: a rotor with permanent magnets and a stator with properly designed windings. Figure 1 shows the structure of the PMSM. When the PMSM is operating, the temperature of the stator winding rises due to copper loss. If the temperature is too high, the insulation ages thermally and, in serious cases, peels off, threatening the safe operation of the motor. If the heat is transferred to the rotor side for a long time, it may cause irreversible demagnetization of the permanent magnets, which effectively destroys the PMSM. This paper uses only a data-driven method to study the thermal management of PMSMs and provides a new research method to aid the design and analysis of PMSMs. The main purpose of this study is to establish a stator winding temperature prediction model for PMSMs based on a DNN and to verify its effectiveness.
This study is organized as follows. Section 2 introduces the basic structure of the DNN and establishes a PMSM stator winding temperature prediction model. Section 3 introduces the dataset used in this paper and presents the prediction performance of the proposed model. Conclusions and prospects are drawn in Section 4.

2. Method Based on DNN

2.1. Establishment of a Stator Winding Temperature Prediction Model for PMSMs

Many factors affect the stator winding temperature of a PMSM. In this paper, 11 variables are taken as input: ambient temperature (ambient), coolant temperature (coolant), direct-axis voltage (u_d), quadrature-axis voltage (u_q), motor speed (motor_speed), torque, direct-axis current (i_d), quadrature-axis current (i_q), permanent magnet surface temperature (pm), stator yoke temperature (stator_yoke) and stator tooth temperature (stator_tooth). The stator winding temperature (stator_winding) is taken as the output. Considering the high dimension of the independent variables, a DNN model was chosen for the prediction.
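For concreteness, this split into inputs and output could look as follows in Python; this is only a sketch, the file name pmsm_temperature_data.csv is a hypothetical placeholder, and the column names follow the dataset described in Section 3.1.

```python
import pandas as pd

# The 11 input features and the output column named in this section.
FEATURES = ["ambient", "coolant", "u_d", "u_q", "motor_speed", "torque",
            "i_d", "i_q", "pm", "stator_yoke", "stator_tooth"]
TARGET = "stator_winding"

# Hypothetical file name; any tabular source with these columns would do.
df = pd.read_csv("pmsm_temperature_data.csv")
X = df[FEATURES].to_numpy()   # shape: (n_samples, 11)
y = df[TARGET].to_numpy()     # shape: (n_samples,)
```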
DNN is an extension of an artificial neural network (ANN), with a structure that is similar to ANN but with a number of hidden layers [20,21,22,23]. Generally, neural networks that have two or more hidden layers can be regarded as a DNN. Figure 2 shows the structures of ANNs and DNNs.
In this paper, the PMSM stator winding temperature prediction model based on a DNN has nine layers. The first layer is the input layer, and its number of nodes equals the number of input variables X (i.e., 11). The ninth (last) layer is the output layer; its number of nodes equals the number of output-dependent variables y, which is 1. The second through eighth layers are hidden layers, each with 14 nodes. Every node of one layer is connected to every node of the next layer, and there are no connections between nodes of the same layer. The activation function of the hidden layers is the ReLU function, and the activation function of the output layer is the tanh function. The loss function of the model is the mean squared error (MSE), and the optimizer used for back-propagation is the Adam algorithm with a learning rate of 0.001. Figure 3 shows the DNN model constructed in this paper.
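As an illustration of this topology, the network could be assembled with the Keras API as sketched below; the paper does not state which framework was used, so the framework choice (and any defaults not listed above) should be read as assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_dnn(n_inputs: int = 11, n_hidden_layers: int = 7,
              n_nodes: int = 14, learning_rate: float = 0.001) -> keras.Model:
    """Input layer (11 nodes), 7 hidden layers of 14 ReLU nodes, 1 tanh output node."""
    model = keras.Sequential()
    model.add(keras.Input(shape=(n_inputs,)))             # input layer
    for _ in range(n_hidden_layers):                      # hidden layers
        model.add(layers.Dense(n_nodes, activation="relu"))
    model.add(layers.Dense(1, activation="tanh"))         # output layer
    # MSE loss with the Adam optimizer at a learning rate of 0.001.
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
                  loss="mse")
    return model
```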
The constructed neural network model was used to predict the PMSM stator winding temperature; Figure 4 shows the prediction process. First, the data were standardized, and the standardized data were then divided into five equal parts for five-fold cross validation. Each part was taken as the test set in turn, while the other four parts were used as the training set. The DNN model was fitted on the training set, and the test set was used to predict the stator winding temperature. The results of the DNN model were consolidated, yielding 6000 predicted values; together with the real values, the metrics of the DNN model could be calculated.
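A sketch of this five-fold procedure is given below, reusing X, y and build_dnn from the earlier sketches. It assumes the target is standardized in the same way as the inputs (consistent with the −1 to 1 predictions discussed in Section 3.2); the number of epochs and the batch size are not reported in the paper and are chosen arbitrarily here.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.preprocessing import StandardScaler

# Standardize inputs and target, then predict every sample once via 5-fold CV.
X_std = StandardScaler().fit_transform(X)
y_std = StandardScaler().fit_transform(y.reshape(-1, 1)).ravel()

y_pred = np.empty_like(y_std)
for train_idx, test_idx in KFold(n_splits=5).split(X_std):
    model = build_dnn()                                    # fresh model per fold
    model.fit(X_std[train_idx], y_std[train_idx],
              epochs=100, batch_size=32, verbose=0)        # assumed settings
    y_pred[test_idx] = model.predict(X_std[test_idx]).ravel()
# y_pred now holds one prediction for each sample (6000 in this paper).
```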

2.2. Assessment of Model

In this paper, the mean absolute error (MAE), root mean squared error (RMSE) and goodness of fit (R2) were selected as metrics to evaluate the prediction performance of the model.
MAE is the average of the absolute errors, which reflects the actual deviation between the predicted and real values. MAE is generally used to measure how far the predicted values deviate from the real values; the smaller the MAE, the better the prediction performance. Let $y_{\mathrm{true}}^{t}$ denote the actual value, $y_{\mathrm{pred}}^{t}$ the predicted value and $N$ the number of samples. Equation (1) is the calculation formula of the MAE:
$$\mathrm{MAE}=\frac{1}{N}\sum_{t=1}^{N}\left|y_{\mathrm{pred}}^{t}-y_{\mathrm{true}}^{t}\right| \tag{1}$$
RMSE is the square root of the mean squared deviation between the predicted and real values. It is often used as a measure of the prediction quality of machine learning models. As with the MAE, the smaller the RMSE, the better the prediction performance. Equation (2) is the calculation formula of the RMSE:
$$\mathrm{RMSE}=\sqrt{\frac{1}{N}\sum_{t=1}^{N}\left(y_{\mathrm{true}}^{t}-y_{\mathrm{pred}}^{t}\right)^{2}} \tag{2}$$
R2 is the goodness of fit, which measures how well the prediction model fits the data. Its range is 0–1: the closer the value is to 1, the better the fit and the better the prediction performance of the model. Equation (3) is the calculation formula of R2, where $y_{\mathrm{avg}}$ is the mean of the actual values:
$$R^{2}=1-\frac{\sum_{t=1}^{N}\left(y_{\mathrm{true}}^{t}-y_{\mathrm{pred}}^{t}\right)^{2}}{\sum_{t=1}^{N}\left(y_{\mathrm{true}}^{t}-y_{\mathrm{avg}}\right)^{2}} \tag{3}$$
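Equations (1)–(3) translate directly into NumPy; the helper names below are illustrative rather than taken from the paper.

```python
import numpy as np

def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Equation (1): mean absolute error."""
    return float(np.mean(np.abs(y_pred - y_true)))

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Equation (2): root mean squared error."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Equation (3): goodness of fit (coefficient of determination)."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)
```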

3. Experimental Results and Discussion

3.1. Introduction of Dataset

The data used in this study were collected from a PMSM placed on a test bench. The PMSM was a prototype from a German original equipment manufacturer. The measuring platform was assembled by the LEA department of Paderborn University [24,25,26,27]. The main purpose of recording the dataset was to model the temperatures of the stator and rotor in real time. Figure 5 shows 6000 samples selected at random from this dataset.
The main features of the selected dataset are as follows:
  • ambient—the ambient temperature is measured by a temperature sensor located close to the stator;
  • coolant—temperature of the coolant. The motor is cooled by water. The measurement is performed at the outflow of water;
  • u_d—voltage of the d-component;
  • u_q—voltage of the q-component;
  • motor_speed—motor speed;
  • torque—torque induced by current;
  • i_d—current of the d-component;
  • i_q—current of the q-component;
  • pm—surface temperature of the permanent magnet, which is the temperature of the rotor;
  • stator_yoke—the stator yoke temperature is measured using a temperature sensor;
  • stator_tooth—the temperature of the stator tooth is measured using a temperature sensor;
  • stator_winding—the temperature of the stator winding is measured using a temperature sensor.
The test conditions for the dataset were as follows:
  • all recordings were sampled at a frequency of 2 Hz (one row every 0.5 s);
  • the motor was driven using manually designed driving cycles specifying the reference motor speed and reference torque;
  • currents in the d/q coordinates (columns “i_d” and “i_q”) and voltages in the coordinates d/q (columns “u_d” and “u_q”) were the result of a standard control strategy that tried to follow the reference speed and torque;
  • the columns “motor_speed” and “torque” are the resulting values achieved by this strategy, obtained from the specified currents and voltages.
In machine learning, if the variance of a feature is several orders of magnitude different from other features, it will occupy a dominant position in the learning algorithm, resulting in the learner not being able to learn from other features as expected. Data standardization can readjust the original data so that they have the properties of standard normal distribution. The processing of standardization is shown as Equation (4):
$$x_{\mathrm{std}}^{i}=\frac{x^{i}-\mu_{x}}{\delta_{x}} \tag{4}$$
where $x_{\mathrm{std}}^{i}$ is the standardized result, $x^{i}$ the original value, $\mu_{x}$ the mean and $\delta_{x}$ the standard deviation.
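Equation (4) is the usual column-wise z-score; a minimal NumPy version is shown below (scikit-learn's StandardScaler, used in the earlier sketch, performs the same operation).

```python
import numpy as np

def standardize(x: np.ndarray) -> np.ndarray:
    """Equation (4): subtract the column mean and divide by the column standard deviation."""
    return (x - x.mean(axis=0)) / x.std(axis=0)
```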

3.2. Prediction Results of the Model

The experiments were run on a dual Intel Xeon E5-2690 v2 CPU system with 128 GB of DDR RAM, manufactured by ASUS in Taiwan, China. The programming language was Python, and the model used was the DNN model built in Section 2.1. After the dataset in Figure 5 was standardized, the 6000 pieces of data were divided into five equal parts for five-fold cross validation. Each part was taken as the test set in turn, and the other four parts were used as the training set; thus, all the data served as both training and test data. Taking the stator winding temperature as the dependent variable y and the remaining 11 variables as the independent variables X, the PMSM stator winding temperature could be predicted.
The curve of the MSE loss value with the number of iterations of the model’s training set is shown in Figure 6. It can be seen that the loss value of the training set decreases rapidly during the first 10 iteration cycles, while it decreases slowly in the later iteration cycles.
Figure 7 shows the prediction results of the model. Figure 7a presents a comparison between the predicted values and the real values; the two largely overlap. Among sigmoid, ReLU (rectified linear unit) and tanh, the established DNN uses tanh as the activation function of the output layer, since it had the best performance on this dataset. The output of the tanh activation function lies between −1 and 1, so the values of y_pred in Figure 7a range from −1 to 1, resulting in local saturation when the values of y_true fall outside this range. Figure 7b shows a comparison between the predicted and real values after the dataset was normalized, and the curves of y_pred and y_true are almost coincident. Figure 7c,d shows the absolute errors and absolute percentage errors, respectively. The absolute errors are always below 0.6 °C, while the absolute percentage errors are rarely above 10%. These results indicate that the model has excellent prediction performance.

3.3. Comparison with Other DNN Models

The network topology of a DNN consists of the number of hidden layers, the number of neurons in each hidden layer, the activation functions of the hidden and output layers, the learning rate and so on, all of which play very important roles in the prediction performance of the DNN. A DNN with an unsuitable topology will not only increase the training time, but also lead to overfitting or underfitting.
To compare with other models more fairly, instead of employing normalization and de-normalization, we directly used the standardized data as input, which better reflects the performance of the proposed model.
In order to study the influence of the different number of hidden layers on the prediction performance, DNNs with two, three, four, five, six, seven (this paper) and eight hidden layers were tested. Other parameters remained at their default values during the test. The corresponding models were constructed for experiments, and RMSE and R2 were selected to evaluate the model. The results are shown in Figure 8. It can be seen that the curve of the RMSE is in the shape of “V”. When the number of hidden layers was two, the maximum RMSE of the model was 0.2375, which then decreased with increases in the number of layers. However, there was an extreme point beyond which the prediction accuracy of the model decreased as the number of layers continued to increase. When there were seven hidden layers, the minimum RMSE of the network was 0.2368 and the maximum R2 was 0.9439. The experimental results verify the superiority of the proposed DNN model in the PMSM stator winding temperature prediction performance.
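A layer-count sweep of this kind could be scripted as sketched below, reusing the standardized data and helper functions from the sketches in Sections 2.1 and 2.2; the training settings are again assumptions rather than the paper's exact configuration.

```python
import numpy as np
from sklearn.model_selection import KFold

# Vary only the number of hidden layers; other hyperparameters keep their defaults.
for n_layers in (2, 3, 4, 5, 6, 7, 8):
    preds = np.empty_like(y_std)
    for train_idx, test_idx in KFold(n_splits=5).split(X_std):
        model = build_dnn(n_hidden_layers=n_layers)
        model.fit(X_std[train_idx], y_std[train_idx],
                  epochs=100, batch_size=32, verbose=0)
        preds[test_idx] = model.predict(X_std[test_idx]).ravel()
    print(f"{n_layers} hidden layers: RMSE={rmse(y_std, preds):.4f}, "
          f"R2={r2(y_std, preds):.4f}")
```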
In order to study the influence of the number of hidden-layer nodes on the prediction performance, DNNs with 10, 12, 14 (this paper), 16, 18 and 20 nodes in each hidden layer were tested. Other parameters remained at their default values during the test. The results are shown in Figure 9. It can be seen that the network with 14 nodes per hidden layer had the minimum RMSE of 0.2371 and the maximum R2 of 0.9438. The experimental results verify the superiority of the proposed DNN model in prediction performance.
In order to study the influence of different activation functions on the prediction performance of the model, linear function, tanh function, sigmoid function and ReLU function (this paper) were selected as the activation functions of DNN models, and these models were tested. Other parameters remained at their default values during the test. Results are shown in Table 1. The prediction accuracy of the DNN model using a ReLU function was the highest, with an RMSE of 0.2369 and an R2 of 0.9438. The experimental results verify the superiority of the proposed DNN model in prediction performance.
In order to study the influence of the learning rate of the back-propagation algorithm on the prediction performance of the DNN model, six different learning rates were used to establish models for testing. Other parameters remained at their default values during the test. Because the differences between the metrics were small, six decimal places are reported. Table 2 shows the results: when the learning rate was 0.001, the RMSE was at its minimum of 0.237080 and the R2 at its maximum of 0.943793. The experimental results verify the advantages of the proposed DNN model in PMSM stator winding temperature prediction.
In summary, the DNN model proposed in this paper was compared with DNN models with different numbers of hidden layers, different numbers of hidden layer nodes, different activation functions and different learning rates. The model proposed in this paper showed the best prediction performance in all experiments. In practical application, the DNN model proposed in this paper is the first choice for predicting PMSM stator winding temperature.

3.4. Comparison with Machine Learning Methods

To compare with other models more fairly, instead of employing normalization and de-normalization, we directly used the standardized data as input, which better reflects the performance of the proposed model.
In order to verify the effectiveness of the DNN model in PMSM stator winding temperature prediction, three traditional machine learning methods (support vector regression (SVR), decision tree and ridge regression) and two ensemble learning methods (random forest and AdaBoosting) were selected for comparative experiments. Each experiment was trained and verified 50 times on the original sample data, and the metrics were averaged. The results are shown in Table 3.
The results show that the DNN model proposed in this paper obtained the best performance when predicting the PMSM stator winding temperature. Its MAE was 0.1515 and its RMSE was 0.2368, so its prediction error was the smallest. Its R2 was 0.9439, the closest to 1, which shows that it fitted the real data best.
Figure 10 shows the three metrics of all the models. It can be seen that the MAE and RMSE of the ridge regression model and the DNN model are smaller, and their R2 larger, than those of the other models, and that the prediction results of the DNN were better than those of ridge regression. Compared with the worst model (decision tree), the R2 of the proposed model was improved by about 42%, and it also exceeded the ridge regression model by 0.0069. The DNN model proposed in this paper can therefore predict the PMSM stator winding temperature better than the other models.
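For illustration, the baseline comparison above could be reproduced in outline with scikit-learn on the same standardized data, reusing the metric helpers from Section 2.2; the hyperparameters below are scikit-learn defaults, since the paper does not report the settings it used.

```python
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor, AdaBoostRegressor
from sklearn.model_selection import cross_val_predict

# Baseline regressors evaluated with 5-fold cross-validated predictions.
baselines = {
    "SVR": SVR(),
    "DecisionTree": DecisionTreeRegressor(),
    "Ridge": Ridge(),
    "RandomForest": RandomForestRegressor(),
    "AdaBoosting": AdaBoostRegressor(),
}
for name, reg in baselines.items():
    preds = cross_val_predict(reg, X_std, y_std, cv=5)
    print(f"{name}: MAE={mae(y_std, preds):.4f}, "
          f"RMSE={rmse(y_std, preds):.4f}, R2={r2(y_std, preds):.4f}")
```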

4. Conclusions and Prospects

In order to effectively predict the PMSM stator winding temperature, a prediction model based on a DNN was proposed in this paper. The model was trained and tested on subsets of a common sample set. Experiments on the number of hidden layers, the number of hidden-layer nodes, the activation functions and the learning rates of the DNN models were carried out to verify the superiority of the proposed DNN model in prediction performance. Additionally, by calculating three metrics (MAE, RMSE and R2), the prediction performance of the proposed DNN model was compared with other machine learning methods. The summary of this paper is as follows:
  • This paper presented a PMSM stator winding temperature prediction model based on a DNN. The model can be used to determine the temperature of the PMSM stator winding and can effectively prevent a series of PMSM faults caused by high stator winding temperature. It is of great significance for ensuring the safe and reliable operation of the PMSM.
  • The model proposed in this paper was compared with DNN models with different numbers of hidden layers, different numbers of hidden-layer nodes, different activation functions and different learning rates, as well as with other machine learning methods. With the dataset used directly as input, the MAE of this model was 0.1515 and the RMSE was 0.2368, both smaller than those of the other models, while the R2 was 0.9439, the closest to 1. With normalization and de-normalization applied, the MAE was 0.0151, the RMSE was 0.0214 and the R2 was 0.9992. Therefore, this model is more suitable for PMSM stator winding temperature prediction under complex nonlinear conditions.
In conclusion, the DNN model proposed in this paper shows better performance than other machine learning methods in PMSM stator winding temperature prediction. The model can play an important role in PMSM temperature detection systems and provide technical support for temperature warning and the safe operation of PMSMs.
In the future, our next step is to find a lower-cost measurement method and to verify the universality of the proposed model for the same PMSM at different power levels and for different PMSMs. Moreover, we are also considering constructing a larger sample set to conduct time-series prediction of PMSM features.

Author Contributions

H.G. conceived and designed the experiments; H.G., Q.D., Y.S., H.T. and J.Z. designed the regression model and performed the programming. L.W. analyzed the performance of the PMSM. H.G., Q.D., Y.S., H.T., L.W. and J.Z. wrote the manuscript. All authors reviewed the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported solely by the Science Foundation of the Ministry of Education of China (No. 18YJCZH040).

Acknowledgments

The authors gratefully acknowledge the helpful comments and suggestions of the reviewers, which improved the presentation.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, H.T.; Dou, M.F.; Deng, J. Loss-Minimization Strategy of Nonsinusoidal Back EMF PMSM in Multiple Synchronous Reference Frames. IEEE Trans. Power Electron. 2020, 35, 8335–8346. [Google Scholar] [CrossRef]
  2. Jafari, M.; Taher, S.A. Thermal survey of core losses in permanent magnet micro-motor. Energy 2017, 123, 579–584. [Google Scholar] [CrossRef]
  3. Chen, S.A.; Jiang, X.D.; Yao, M.; Jiang, S.M.; Chen, J.; Wang, Y.X. A dual vibration reduction structure-based self-powered active suspension system with PMSM-ball screw actuator via an improved H-2/H-infinity control. Energy 2020, 201, 117590. [Google Scholar] [CrossRef]
  4. Wallscheid, O.; Specht, A.; Böcker, J. Observing the Permanent-Magnet Temperature of Synchronous Motors Based on Electrical Fundamental Wave Model Quantities. IEEE Trans. Ind. Electron. 2017, 64, 3921–3929. [Google Scholar] [CrossRef]
  5. Mohammed, A.; Djurović, S. FBG Thermal Sensing Ring Scheme for Stator Winding Condition Monitoring in PMSMs. IEEE Trans. Transp. Electrif. 2019, 5, 1370–1382. [Google Scholar]
  6. Mohammed, A.; Djurović, S. Open-Circuit Fault Detection in Stranded PMSM Windings Using Embedded FBG Thermal Sensors. IEEE Sens. J. 2019, 19, 3358–3367. [Google Scholar]
  7. Joo, D.; Cho, J.-H.; Woo, K.; Kim, B.-T.; Kim, D.-K. Electromagnetic Field and Thermal Linked Analysis of Interior Permanent-Magnet Synchronous Motor for Agricultural Electric Vehicle. IEEE Trans. Magn. 2011, 47, 4242–4245. [Google Scholar] [CrossRef]
  8. Grobler, A.J.; Holm, S.R.; Van, S.G. A Two-Dimensional Analytic Thermal Model for a High-Speed PMSM Magnet. IEEE Trans. Ind. Electron. 2015, 62, 6756–6764. [Google Scholar] [CrossRef]
  9. Chen, W.G.; Su, X.P.; Chen, X.; Zhou, Q.; Xiao, H.G. Combination of Support Vector Regression with Particle Swarm Optimization for Hot-spot temperature prediction of oil-immersed power transformer. Prz. Elektrotech. 2012, 88, 172–176. [Google Scholar]
  10. Jo, H.; Hwang, H.J.; Phan, D.; Lee, Y.; Jang, H. Endpoint Temperature Prediction model for LD Converters Using Machine-Learning Techniques. In Proceedings of the 2019 IEEE 6th International Conference on Industrial Engineering and Applications, Tokyo, Japan, 14 May 2019; pp. 22–26. [Google Scholar]
  11. Wang, X.J. Ladle Furnace Temperature Prediction Model Based on Large-scale Data with Random Forest. IEEE/CAA J. Autom. Sin. 2017, 4, 770–774. [Google Scholar]
  12. Zhukov, A.; Tomin, N.; Kurbatsky, V.; Sidorov, D.; Panasetsky, D.; Foley, A. Ensemble methods of classification for power systems security assessment. Appl. Comput. Inform. 2019, 15, 45–53. [Google Scholar] [CrossRef] [Green Version]
  13. Su, X.L.; Zhang, S.; Yin, Y.X.; Xiao, W.D. Prediction model of hot metal temperature for blast furnace based on improved multi-layer extreme learning machine. Int. J. Mach. Learn. Cybern. 2019, 10, 2739–2752. [Google Scholar] [CrossRef]
  14. Li, H. Deep learning for natural language processing: Advantages and challenges. Natl. Sci. Rev. 2018, 5, 24–26. [Google Scholar] [CrossRef]
  15. Chinnathambi, R.A.; Plathottam, S.J.; Hossen, T.; Nair, A.S.; Ranganathan, P. Deep Neural Networks (DNN) for Day-Ahead Electricity Price Markets. In Proceedings of the 2018 IEEE Electrical Power and Energy Conference (EPEC), Toronto, ON, Canada, 10 October 2018. [Google Scholar]
  16. Kasburg, C.; Frizzo, S. Deep Learning for Photovoltaic Generation Forecast in Active Solar Trackers. IEEE Lat. Am. Trans. 2019, 17, 2013–2019. [Google Scholar] [CrossRef]
  17. Tao, Q.; Liu, F.; Li, Y.; Sidorov, D. Air Pollution Forecasting Using a Deep Learning Model Based on 1D Convnets and Bidirectional GRU. IEEE Access 2019, 7, 76690–76698. [Google Scholar] [CrossRef]
  18. Sengar, S.; Liu, X. Ensemble approach for short term load forecasting in wind energy system using hybrid algorithm. J. Ambient. Intell. Humaniz. Comput. 2020, 1–18. [Google Scholar] [CrossRef]
  19. Gui, N.; Lou, J.; Zhifeng, Q.; Gui, W. Temporal Feature Selection for Multi-Step Ahead Reheater Temperature Prediction. Processes 2019, 7, 473. [Google Scholar] [CrossRef] [Green Version]
  20. Egrioglu, E.; Yolcu, U.; Bas, E.; Dalar, A.Z. Median-Pi artificial neural network for forecasting. Neural Comput. Appl. 2017, 31, 307–316. [Google Scholar] [CrossRef]
  21. Heo, S.; Lee, J.H. Parallel neural networks for improved nonlinear principal component analysis. Comput Chem. Eng. 2019, 127, 1–10. [Google Scholar] [CrossRef]
  22. Sze, V.; Chen, Y.-H.; Yang, T.-J.; Emer, J.S. Efficient Processing of Deep Neural Networks: A Tutorial and Survey. Proc. IEEE 2017, 105, 2295–2329. [Google Scholar] [CrossRef] [Green Version]
  23. Tao, Q.; Liu, F.; Sidorov, D. Recurrent Neural Networks Application to Forecasting with Two Cases: Load and Pollution. Adv. Intell. Syst. Comput. 2020, 1072, 369–378. [Google Scholar]
  24. Specht, A.; Wallscheid, O.; Boecker, J. Determination of rotor temperature for an interior permanent magnet synchronous machine using a precise flux observer. In Proceedings of the 2014 International Power Electronics Conference (IPEC-ECCE-ASIA), Hiroshima, Japan, 18 May 2014; pp. 1501–1507. [Google Scholar]
  25. Wallscheid, O.; Huber, T.; Peters, W.; Böcker, J. Real-time capable methods to determine the magnet temperature of permanent magnet synchronous motors. In Proceedings of the 2014 40th Annual Conference of the IEEE-Industrial-Electronics-Society (IECON), Dallas, TX, USA, 29 October 2014; pp. 811–818. [Google Scholar]
  26. Wallscheid, O.; Boecker, J. Fusion of direct and indirect temperature estimation techniques for permanent magnet synchronous motors. In Proceedings of the 2017 IEEE International Electric Machines and Drives Conference (IEMDC), Miami, FL, USA, 20 May 2017. [Google Scholar]
  27. Gaona, D.; Wallscheid, O.; Böcker, J. Improved Fusion of Permanent Magnet Temperature Estimation Techniques for Synchronous Motors Using a Kalman Filter. IEEE Trans. Ind. Electron. 2019, 67, 1708–1717. [Google Scholar]
Figure 1. Internal structure of a permanent magnet synchronous motor (PMSM).
Figure 2. Structures of an artificial neural network (ANN) and a deep neural network (DNN). (a) A simple example of an ANN. (b) A DNN has two or more hidden layers.
Figure 3. PMSM stator winding temperature prediction model.
Figure 4. The process of predicting the PMSM stator winding temperature.
Figure 5. Dataset of the PMSM. (a) ambient temperature; (b) coolant temperature; (c) u_d; (d) u_q; (e) motor_speed; (f) torque; (g) i_d; (h) i_q; (i) pm; (j) stator_yoke; (k) stator_tooth; (l) stator_winding.
Figure 6. Curve of the MSE loss value with iteration times.
Figure 7. Prediction results. (a) Comparison of predicted values and real values. (b) Comparison of predicted values and real values after the dataset was normalized. (c) Absolute errors of predicted values and real values. (d) Absolute percentage errors of predicted values and real values.
Figure 8. Prediction performance of DNN models with different numbers of hidden layers.
Figure 9. Prediction performance of DNN models with different numbers of hidden layer nodes.
Figure 10. Prediction performance of the DNN model and other models.
Table 1. Prediction performance of DNN models with different activation functions.
| Activation Functions | RMSE | R2 |
| --- | --- | --- |
| linear | 0.2625 | 0.9310 |
| tanh | 0.2375 | 0.9435 |
| sigmoid | 0.2394 | 0.9427 |
| ReLU (this paper) | 0.2369 | 0.9438 |
Table 2. Prediction performance of DNN models with different learning rates.
| Learning Rates | RMSE | R2 |
| --- | --- | --- |
| 0.0001 | 0.239620 | 0.942582 |
| 0.001 (this paper) | 0.237080 | 0.943793 |
| 0.002 | 0.237114 | 0.943777 |
| 0.01 | 0.242495 | 0.941196 |
| 0.02 | 0.278323 | 0.922536 |
| 0.1 | 0.998120 | 0.003756 |
Table 3. Comparison of metrics.
| Methods | MAE 1 | RMSE | R2 |
| --- | --- | --- | --- |
| SVR 2 | 0.3996 | 0.5094 | 0.7405 |
| DecisionTree | 0.4516 | 0.5817 | 0.6616 |
| Ridge | 0.1731 | 0.2510 | 0.9370 |
| RandomForest | 0.4203 | 0.4937 | 0.7562 |
| AdaBoosting | 0.4354 | 0.4937 | 0.7563 |
| DNN | 0.1515 | 0.2368 | 0.9439 |

1 mean absolute error. 2 support vector regression.
