Article

A New Intelligent Estimation Method Based on the Cascade-Forward Neural Network for the Electric and Magnetic Fields in the Vicinity of the High Voltage Overhead Transmission Lines

by Shahin Alipour Bonab, Wenjuan Song and Mohammad Yazdani-Asrami *
Propulsion, Electrification & Superconductivity Group, James Watt School of Engineering, University of Glasgow, Glasgow G12 8QQ, UK
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(20), 11180; https://doi.org/10.3390/app132011180
Submission received: 12 September 2023 / Revised: 8 October 2023 / Accepted: 9 October 2023 / Published: 11 October 2023

Abstract:
The evaluation and estimation of the electric and magnetic field (EMF) intensity in the vicinity of overhead transmission lines (OHTL) is of paramount importance for residents’ healthcare and for industrial monitoring purposes. Artificial intelligence (AI) techniques enable researchers to estimate the EMF with extremely high accuracy in a significantly short time. In this paper, two models based on the Artificial Neural Network (ANN) have been developed for estimating the electric and magnetic fields, namely the feed-forward neural network (FFNN) and the cascade-forward neural network (CFNN). By performing a sensitivity analysis on the controlling/hyper-parameters of these two ANN models, the setup yielding the highest possible accuracy with an acceptable response time has been chosen. Overall, the CFNN achieved a significant 56% reduction in Root Mean Squared Error (RMSE) for the electric field and a 5% reduction for the magnetic field compared to the FFNN. This indicates that the CFNN model provides more accurate predictions, particularly for the electric field, than the FFNN and the methods proposed in other recent works, making it a promising choice for this application. Once trained, each model is tested on a separate dataset, and its accuracy and response time for new data points of the studied line layout are evaluated. The models can predict the fields with an accuracy near 99.999% of the actual values in under 10 ms. The results of the sensitivity analysis also indicated that CFNN models with three and two hidden layers are the best options for the electric and magnetic field estimation, respectively.

1. Introduction

Overhead transmission lines (OHTL) play a crucial role in transmitting electrical power over long distances [1]. They consist of conductors carrying alternating current (AC) from power generation sources to distribution centers and consumers. The electric and magnetic fields around an OHTL result from the voltage on and the current flowing through these conductors [2]. The EMF generated by the individual conductors of the transmission line combines to form a complex pattern around the entire line [3]. Several reasons make EMF estimation of paramount importance. The major reason is that the EMF has a significant effect on the human body [4]. Many studies in recent decades have demonstrated that the EMF caused by OHTLs in residential areas is one of the main reasons for the increased incidence of cancer, especially childhood leukemia [5]. Additionally, some studies have reported an increased risk of brain tumors among individuals exposed to prolonged and high-intensity EMFs [6]. Furthermore, the EMF has an extensive effect on the corrosion of buried metallic infrastructures including pipelines, cables, shielding conductors, and grounding systems [7,8,9,10]. Therefore, health scientists, utility companies, and governments have to set limits to prevent future health problems [10]. Many organizations consider 0.4 µT an acceptable level for long-term exposure to electromagnetic fields, while a few adopt a stricter threshold of 0.2 µT as the critical point for leukemia risk [11,12,13]. The National Institute of Environmental Health Sciences and the International Agency for Research on Cancer (IARC), a part of the World Health Organization (WHO), have jointly identified the range of 0.3–0.4 µT as a critical threshold for leukemia risk, categorizing it at the Group 2B level [14,15,16]. In contrast, the International Commission on Non-Ionizing Radiation Protection (ICNIRP) recommends a considerably higher limit of 200 µT for public exposure, which significantly exceeds the recommendations of other reputable health organizations [17]. In addition, for real-time monitoring of these systems, engineers need to estimate the EMF with exceptionally low latency [18].
Several methods are commonly used by researchers for EMF estimation. Measuring the EMF experimentally with sensors is one of the most accurate [2]. That said, since the load of an OHTL varies in real-world cases, the experiment is not carried out under fully controlled conditions. Temperature changes can also affect the conductor height in long-term experiments, which changes the measured fields. This can be overcome by collecting seasonal or annual datasets and then applying “big data” techniques and AI to the resulting large datasets to estimate the field values. Moreover, measurement requires precise instruments and skilled operators to avoid inaccuracies, which makes it an expensive practice [19]. Another means of EMF evaluation is analytical equations, which can estimate the EMF only under simplified boundary conditions, limiting their usefulness to simple problems [20]. EMF can also be estimated using numerical finite element methods (FEM). In this approach, the entire analysis domain must be divided into small elements on which the governing equations are solved numerically [21]. This means that an enormous number of equations has to be solved, making the method computationally intensive, expensive, and time-consuming [22]. Moreover, FEM accuracy relies heavily on the quantity and quality of the meshes used to discretize the domain [7]. Achieving optimal mesh refinement requires careful consideration and expertise to balance accuracy against computational cost. For complex real-world geometries or high-resolution simulations, the mesh must be adapted to capture every dynamic change in the parameters in order to ensure accurate results [23,24]. Furthermore, FEM can be challenging for time-dependent EMF simulations, particularly when dynamic effects and transient behaviors are involved [25]. All these reasons hinder the use of these methods in certain EMF estimation scenarios and encourage researchers to explore or develop alternatives.
Artificial intelligence (AI) techniques have gained traction among engineering and physics researchers since the early years of this century [26,27]. These techniques are useful for EMF estimation because of their ability to handle and learn from complex datasets, provide accurate predictions, and offer several advantages over traditional approaches [28,29]. They can also be retrained with additional data to broaden the range of operating conditions covered or to improve model accuracy. One of the most renowned types of AI for engineering and physics problems is the artificial neural network (ANN), which is inspired by the human brain’s learning and decision-making processes [19]. The flexibility and adaptability of ANNs enable them to handle both small and large datasets and to extract meaningful insights from vast amounts of data [30]. Their capability to capture non-linear relationships also makes them particularly advantageous in addressing complex and dynamic problems [27]. Moreover, they are able to predict target parameters with remarkably high accuracy in a noticeably short timeframe. The feed-forward neural network (FFNN) is one of the most common ANN methods among researchers due to its simplicity and high accuracy [13]. The cascade-forward neural network (CFNN) is a more complex variant of the FFNN in which the inputs and the outputs of every earlier layer are also connected to all subsequent layers [31,32]. Owing to this inherently more complex design, it can in some cases yield more accurate results than a simple FFNN [33].
Over the most recent decade, researchers have tried to implement AI methods to predict electric or magnetic fields. In [34], Ekonomou et al. first built a measurement setup to create a dataset for the AI model and then developed a multilayer FFNN to predict the EMF radiated by electrostatic discharges. The relative error between the predicted and actual EMF values was reported to be between 5.437% and 23.620%. In [13], Carlak et al. used a simple multilayer perceptron (MLPNN) and a generalized regression neural network (GRNN) to predict electric and magnetic fields. They proposed several models for both fields, each considering only one longitudinal position of the conductor. Using the Root Mean Squared Error (RMSE) as the index, the performance of the MLPNN and GRNN models was reported as 0.030855 and 0.053084 for the electric field and 0.02719 and 0.03666 for the magnetic field, respectively. In [35], Salam et al. implemented single- and double-layer FFNN models to predict the magnetic fields of four substations in Brunei. They trained the models for each substation separately, and the results indicated that the R-squared values of their models ranged from 70.9039% to 98.881%.
In [20], Sivakami et al. suggested a model combining a cuckoo search algorithm (CSA) with a neuro-fuzzy controller (NFC). They first used the cuckoo search algorithm to optimize the conductor spacing, which has a significant effect on the EMF intensity, in order to build an input dataset with minimum electric field intensity for the NFC. The training data were generated from base equations under several simplifying assumptions needed to apply those formulas. After training the NFC with that dataset, their model estimated the intensity with a relative error of 5–190% across different data points. In [30], Alihodzic et al. implemented two algorithms, namely the charge simulation method and the Biot–Savart law, to generate target values for the electric field intensity and magnetic flux density datasets, respectively. They then developed an FFNN model using the Scaled Conjugate Gradient (SCG) training function. In another paper with the same data collection process, Turajlic et al. implemented an FFNN model for each of the electric and magnetic fields, with accuracy reported using RMSE and R-squared as indices. For the electric field model, the RMSE and R-squared were 0.6172 and 0.9121, while for the magnetic field they were 0.3602 and 0.9471, respectively. However, since these papers relied on analytical models with simplifications, the final models might not be accurate enough for real-world study cases [36].
While there have been efforts to estimate or measure the EMF near OHTLs, a significant gap exists in the literature regarding the development of a fast, precise, experiment-based EMF estimation approach, as opposed to relying solely on conventional analytical models. Recent research has predominantly focused on utilizing the FFNN for EMF estimation, resulting in suboptimal accuracy. Consequently, there is a pressing need for a more advanced model capable of effectively handling highly non-linear data. One such model is the CFNN, renowned for its ability to provide accurate predictions for complex and non-linear problems. The CFNN’s sophistication lies in its capacity to update layer parameters based on the outputs of all preceding layers, enabling the model to derive more optimal weight and bias factors and ultimately yielding more accurate results. This paper proposes CFNN models for estimating the electric and magnetic fields of OHTLs. These models have been assessed through a sensitivity analysis on the effective parameters, ensuring that they are trained with the best setup to reach the most accurate and stable results. In the following, the models are first introduced and explained in detail, and the sensitivity analysis process is then discussed. In Section 3, the results of each step of the sensitivity analysis are presented and the performance of both the FFNN and CFNN models is discussed. Finally, a brief conclusion is drawn in Section 4.

2. ANN Materials and Methods

As discussed above, the ANN approach has been chosen for this study because it is highly flexible with respect to the dataset: it can handle large datasets with many observations and effective parameters as well as smaller ones with limited observations, which are not suitable for more complex machine learning methods. One ANN variant is the CFNN, which has been chosen as the main approach for this study. The other method is the FFNN, which is widely used among researchers and is employed here for comparison purposes. In the following, these methods are discussed and their differences are highlighted.

2.1. FFNN

2.1.1. Architecture

The FFNN approach can be put into action by employing a multilayer ANN model that comprises interconnected layers of neurons. The input layer receives EMF-related features including the longitude and altitude of the conductor position. Within the hidden layers, which can vary in number and size (the number of neurons), computations are performed on the input data using weighted connections. Each neuron within the hidden layers applies a non-linear activation function to its weighted inputs, enabling the network to capture intricate relationships and non-linearity in the data. The output layer generates estimated EMF values based on the computations conducted in the preceding layers.
The fundamental equations of this methodology are as follows [9]:
$$ y_p = f_0\left( \sum_{j=1}^{n} \omega_{j0}\, f_j^{H}\left( \sum_{i=1}^{n} \omega_{ji}^{H}\, x_i \right) \right) \tag{1} $$

where $f_0$ and $f_j^{H}$ designate the output-layer and hidden-layer activation functions, respectively. Considering the addition of bias to both the input layer and the hidden layer, Equation (1) turns into:

$$ y_p = f_0\left( \omega_b + \sum_{j=1}^{n} \omega_{j0}\, f_j^{H}\left( \omega_j^{H} + \sum_{i=1}^{n} \omega_{ji}^{H}\, x_i \right) \right) \tag{2} $$

where $\omega_j^{H}$ and $\omega_b$ indicate the respective weights from the bias to the hidden layer and to the output layer.
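To make the data flow concrete, the following is a minimal NumPy sketch of the single-hidden-layer forward pass in the form of Equation (2). The weight values, the two input features, and the Tansig/Purelin activation choices are illustrative assumptions, not the trained parameters of the models in this paper.

```python
import numpy as np

def ffnn_forward(x, W_h, b_h, w_o, b_o, f_h=np.tanh, f_o=lambda z: z):
    """Single-hidden-layer FFNN forward pass in the form of Equation (2):
    y_p = f_o(b_o + sum_j w_o[j] * f_h(b_h[j] + sum_i W_h[j, i] * x[i]))."""
    hidden = f_h(b_h + W_h @ x)      # hidden-layer activations
    return f_o(b_o + w_o @ hidden)   # scalar network output y_p

# Toy usage: 2 inputs (e.g., lateral position and height), 3 hidden neurons
rng = np.random.default_rng(0)
W_h = rng.normal(size=(3, 2))   # input-to-hidden weights   (omega^H_ji)
b_h = rng.normal(size=3)        # hidden-layer bias weights (omega^H_j)
w_o = rng.normal(size=3)        # hidden-to-output weights  (omega_j0)
b_o = rng.normal()              # output bias weight        (omega_b)
x = np.array([10.0, 1.7])       # example feature vector
print(ffnn_forward(x, W_h, b_h, w_o, b_o))
```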

2.1.2. Training, Validation, and Testing Processes

Training the FFNN involves optimizing the network’s parameters, the weights and biases, so that the predicted EMF outputs are as close as possible to the actual EMF values. This is conducted through so-called “backpropagation”, where the network learns from its errors and adjusts its weights and biases accordingly. To guide the training, a loss function measures the difference between the predicted and actual EMF values; a common choice is the Root Mean Squared Error (RMSE). The network’s parameters are adjusted over multiple iterations using optimization algorithms, also known as training functions, such as Levenberg–Marquardt (LM) or the Scaled Conjugate Gradient (SCG), which use the training data to determine how much the weights and biases should be updated.
After training the FFNN, it is essential to evaluate its performance and validate its effectiveness. Performance metrics such as the RMSE and the coefficient of determination (R-squared or $R^2$) can be calculated on data held out from training to assess the network’s accuracy in estimating the EMF values.
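As an illustration of this train/test workflow, the sketch below fits a small feed-forward regressor on synthetic stand-in data using scikit-learn. Note that scikit-learn does not provide the Levenberg–Marquardt or Scaled Conjugate Gradient training functions used in this paper, so the 'lbfgs' solver is used purely as a stand-in; the data, network size, and split are also assumptions for demonstration only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, r2_score

# Synthetic stand-in data: features = (lateral position, height), target = field value
rng = np.random.default_rng(42)
X = rng.uniform([-40.0, 0.0], [40.0, 2.0], size=(105, 2))   # 21 x 5 grid-like points
y = 1.0 / (1.0 + 0.01 * X[:, 0] ** 2) + 0.05 * X[:, 1]      # smooth dummy field

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=1)

# Stand-in model: LM/SCG are not available in scikit-learn, so 'lbfgs' is used here
model = MLPRegressor(hidden_layer_sizes=(3,), activation='tanh',
                     solver='lbfgs', max_iter=5000, random_state=1)
model.fit(X_tr, y_tr)

y_hat = model.predict(X_te)
rmse = mean_squared_error(y_te, y_hat) ** 0.5
print(f"RMSE = {rmse:.4f}, R^2 = {r2_score(y_te, y_hat):.4f}")
```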

2.2. CFNN

Unique Features and Architecture

CFNN distinguishes itself from other ANN variants through its sequential learning approach. Unlike the FFNN, where data flow through the layers in a single pass, the CFNN introduces a two-stage learning process. In the first stage, a hidden layer is trained using a traditional feed-forward learning algorithm. Then, in the second stage, additional hidden units are added sequentially in a cascade fashion, each trained to minimize the error remaining from the previous layer. This sequential learning process allows the network to refine its estimations layer by layer, progressively improving accuracy and enhancing the overall estimation performance, which is shown in Figure 1. The addition of the cascade units and its incorporation into the network architecture enables the CFNN with multiple hidden layers to learn complex features in a gradual and systematic manner, further enhancing its capacity for learning intricate patterns and achieving improved performance [33,37]. Therefore, the main difference between CFNN and FFNN is that the number of weight factors in each layer of CFNN increases in a cascade manner. This means that by moving to the next layers, the network will have more weight factors that contribute to the impact of the outputs of all previous layers, while in the FFNN, only one weight factor contributes to the influence of the previous layer (not all of the previous ones).
The related mathematical equation for CFNN can be expressed as [9]:
$$ y_p = \sum_{i=1}^{n} f_i\, \omega_{i0}\, x_i + f_0\left( \sum_{j=1}^{n} \omega_{j0}\, f_j^{H}\left( \sum_{i=1}^{n} \omega_{ji}^{H}\, x_i \right) \right) \tag{3} $$

where $f_i$ and $f_j^{H}$ designate the output-layer and hidden-layer activation functions, respectively. By adding bias to both the input layer and the hidden layers, Equation (3) can be modified to:

$$ y_p = \sum_{i=1}^{n} f_i\, \omega_{i0}\, x_i + f_0\left( \omega_b + \sum_{j=1}^{n} \omega_{j0}\, f_j^{H}\left( \omega_j^{H} + \sum_{i=1}^{n} \omega_{ji}^{H}\, x_i \right) \right) \tag{4} $$

where $\omega_j^{H}$ and $\omega_b$ indicate the respective weights from the bias to the hidden layer and to the output layer.
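To illustrate the cascade connectivity described above, the following NumPy sketch concatenates the original inputs with the outputs of every preceding hidden layer before feeding each subsequent layer, and gives the output layer access to the full cascade. It is a conceptual illustration with random weights, not the authors' implementation.

```python
import numpy as np

def cfnn_forward(x, layer_weights, layer_biases, hidden_act=np.tanh):
    """Cascade-forward pass: each layer sees the original inputs concatenated
    with the outputs of *all* preceding layers. layer_weights[k] has shape
    (n_k, n_inputs + sum of previous layer sizes); the last entry is the
    (linear, Purelin-style) output layer."""
    cascade = x                                        # running concatenation
    for W, b in zip(layer_weights[:-1], layer_biases[:-1]):
        h = hidden_act(W @ cascade + b)                # hidden layer k
        cascade = np.concatenate([cascade, h])         # pass it on to all later layers
    W_out, b_out = layer_weights[-1], layer_biases[-1]
    return W_out @ cascade + b_out                     # output sees inputs + all hidden layers

# Toy usage: 2 inputs, two hidden layers of sizes 3 and 2, scalar output
rng = np.random.default_rng(0)
sizes, n_in = [3, 2], 2
weights, biases, width = [], [], n_in
for n in sizes + [1]:                                  # last "layer" is the output
    weights.append(rng.normal(size=(n, width)))
    biases.append(rng.normal(size=n))
    width += n                                         # cascade width grows after each layer
print(cfnn_forward(np.array([10.0, 1.7]), weights, biases))
```

The only structural difference from the FFNN sketch above is the growing `cascade` vector, which is exactly the extra set of weight factors per layer described in the text.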

3. Results and Discussion

3.1. Data Collection

The experimental data used in this paper were collected from [13], which provides data from an experimental measurement campaign on a 154 kV OHTL in which the EMF was measured using sensors in the vicinity of the line. To measure the electric field, a CA42 LF field meter was used at 21 different longitudinal positions and 5 different heights above ground level. During the measurement period, the recorded instantaneous current of the transmission line was approximately 156.3 A. The magnetic field was measured in the same manner using a Magnetic Field Hitester 3470 with the 3471 magnetic field sensor.
After data collection, both datasets were organized and preprocessed, making them suitable for use as input datasets for ANN models.
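One possible way to organize such measurements into an ANN-ready dataset is sketched below. The grid spacing, heights, and field values are placeholders (the measured dataset from [13] is not reproduced here), and the min-max scaling is a common preprocessing assumption rather than a step stated in the paper.

```python
import numpy as np

# Hypothetical measurement grid mirroring the setup described above:
# 21 lateral positions and 5 heights above ground (all values are placeholders).
positions = np.linspace(-40.0, 40.0, 21)                 # lateral distance from the line [m]
heights = np.array([0.5, 1.0, 1.5, 1.75, 2.0])           # measurement heights [m] (assumed)

P, H = np.meshgrid(positions, heights, indexing="ij")
X = np.column_stack([P.ravel(), H.ravel()])              # 105 input rows: (position, height)

# Placeholder field readings standing in for the sensor measurements
y = 1.0 / (1.0 + 0.01 * X[:, 0] ** 2) + 0.05 * X[:, 1]

# Min-max scale the inputs to [0, 1] so that neither feature dominates training
X_scaled = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
```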

3.2. Error Indices for Evaluating Model Performances

There are three main indices that have been used to assess the accuracy of the different setups of a model as follows [38,39]:
$$ \mathrm{SD} = \sqrt{\frac{\sum_{i=1}^{m}\left(\bar{y} - y_i\right)^{2}}{m-1}} \tag{5} $$

$$ \mathrm{RMSE} = \sqrt{\frac{\sum_{k=1}^{n_s}\left(d_k - y_k\right)^{2}}{n_s}} \tag{6} $$

$$ R^{2} = \frac{\left[\sum_{k=1}^{n_s}\left(d_k - \bar{d}\right)\left(y_k - \bar{y}\right)\right]^{2}}{\sum_{k=1}^{n_s}\left(d_k - \bar{d}\right)^{2}\,\sum_{k=1}^{n_s}\left(y_k - \bar{y}\right)^{2}} \tag{7} $$
In these equations, $m$ represents the number of iterations for each setup, $y$ is the predicted value, $\bar{y}$ is the mean value of $y$ over the $m$ iterations, $n_s$ is the number of samples of the training dataset, $d_k$ is the actual value, and $\bar{d}$ is the mean value of $d_k$.
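For reference, these three indices translate directly into a few lines of NumPy; the functions below follow Equations (5)–(7) as written above, using the squared-correlation form of R-squared.

```python
import numpy as np

def rmse(d, y):
    """Root mean squared error between actual d and predicted y (Equation (6))."""
    d, y = np.asarray(d), np.asarray(y)
    return np.sqrt(np.mean((d - y) ** 2))

def r_squared(d, y):
    """Squared correlation between actual and predicted values (Equation (7))."""
    d, y = np.asarray(d), np.asarray(y)
    num = np.sum((d - d.mean()) * (y - y.mean())) ** 2
    den = np.sum((d - d.mean()) ** 2) * np.sum((y - y.mean()) ** 2)
    return num / den

def run_std(values):
    """Sample standard deviation of a metric over repeated runs (Equation (5))."""
    return np.std(np.asarray(values), ddof=1)   # ddof=1 gives the (m - 1) denominator
```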

3.3. Sensitivity Analysis

The sensitivity of the models to their major controlling parameters, or so-called “hyper-parameters”, should be tested for both the electric and magnetic fields to make sure that the best architecture/setup is chosen for the final model. Therefore, a systematic approach has been followed for this step of the analysis, as illustrated in Figure 2.

3.3.1. Layers

One of the most important parameters that has a significant impact on the accuracy and the training time of an ANN model is the number of hidden layers. For simplicity, most researchers consider only one hidden layer for their ANN models. In this work, however, five different numbers of hidden layers, from a single-layer model to a quintuple-layer one, have been tested to ensure that the best number of hidden layers for this dataset is carried into the further steps. Note that, for each number of hidden layers, the index values of its best neuron setup were used to decide which option is best for the subsequent steps of the analysis. Also, Levenberg–Marquardt as the training function, Purelin–Tansig as the activation function pair, and 70% as the training ratio have been considered for this step, according to [28].

Electric Field

Figure 3 demonstrates that, for the CFNN, the more hidden layers the model has, the lower its RMSE and the higher its R-squared; that is, the overall accuracy of the CFNN model increases as hidden layers are added. For the FFNN, in contrast, the RMSE climbs and the R-squared drops after three hidden layers, so adding further hidden layers does not increase, and can actually decrease, the model’s accuracy. This is a good example of the necessity of sensitivity analysis for AI models.
In terms of the response time, for both models it soars after three hidden layers. This means that, depending on the application of the model and the available computing resources, an appropriate limit for the response time should be chosen. Note that the response time reported in this paper is the time in which the model is able to predict the fields, so it may vary between computers. In this research, the models have been tested on a computer with an Intel® Core™ i7-4710HQ CPU and 12 GB of RAM.
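As a rough illustration of how such response times can be measured, the helper below averages the wall-clock time of repeated prediction calls. The model and test-set names in the commented example are hypothetical, and absolute timings depend entirely on the hardware used.

```python
import time

def measure_response_time(predict_fn, X, repeats=100):
    """Average wall-clock time (in ms) for one batch prediction; hardware-dependent."""
    t0 = time.perf_counter()
    for _ in range(repeats):
        predict_fn(X)
    return 1e3 * (time.perf_counter() - t0) / repeats

# Example with the hypothetical trained `model` and test set `X_te` from the earlier sketch:
# print(f"{measure_response_time(model.predict, X_te):.2f} ms per prediction call")
```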

Magnetic Field

In terms of accuracy, the line graphs of Figure 4 indicate that the results of the CFNN and FFNN are very close; however, the CFNN still performs better. The CFNN shows slight drops in accuracy when more than two hidden layers are added, which makes two the best number of layers for it. The same is observed for the FFNN: the two-hidden-layer configuration has the lowest RMSE and highest R-squared, making it the best option, although it is still not as accurate as the CFNN.
In terms of response time, it generally increases as more layers are added. That said, although the double-layer model’s response time is the second lowest, noticeably higher than that of the single-layer model, the difference in time can be neglected once the accuracy of the models is taken into account.

3.3.2. Neurons

As with the layers, sensitivity analysis on the number of neurons in each hidden layer of an ANN is a critical approach for gaining deeper insight into the network’s internal processes. Each neuron plays a vital role in the computation and feature-extraction process. To explore the optimal configuration, we systematically varied the number of neurons in each hidden layer, from a single neuron to a maximum of 15 neurons, giving 15 possible sizes per hidden layer. Extending this analysis to multiple hidden layers, denoted as $k$, results in $15^k$ cases for each $k$ and a total of 813,615 unique configurations. This exhaustive exploration aimed to identify the most efficient setup, so only the best configuration for each $k$ is reported in the figures and tables.
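The size of this search space can be reproduced with a short enumeration, sketched below; `train_and_score` is a hypothetical placeholder for training one model with a given hidden-layer configuration and returning its mean RMSE.

```python
from itertools import product

neuron_range = range(1, 16)              # 1 to 15 neurons per hidden layer

def neuron_configs(max_layers=5):
    """Yield every hidden-layer setup from 1 to max_layers layers, 1-15 neurons each."""
    for k in range(1, max_layers + 1):
        yield from product(neuron_range, repeat=k)

total = sum(1 for _ in neuron_configs())
print(total)                             # 15 + 15**2 + ... + 15**5 = 813615

# Sketch of the sweep itself (train_and_score is a placeholder, not a real function here):
# best = min(neuron_configs(), key=lambda cfg: train_and_score(cfg))
```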

Electric Field

Table 1 and Table 2 show the best neuron setup for each number of hidden layers with respect to the RMSE value. The CFNN triple-layer model with the [1 3 7] neuron setup stands out as the optimum configuration, combining an extremely low RMSE with a moderate response time.

Magnetic Field

According to Table 3 and Table 4, the double-layer models, with a [3 3] neuron setup for the CFNN and a [3 9] setup for the FFNN, are the best options at this step of the study.

3.3.3. Training Functions

Sensitivity analysis of training functions in ANNs is a fundamental step in understanding the impact of different optimization algorithms on the network’s learning process and performance. The choice of a suitable training function directly influences the convergence rate, accuracy, and efficiency of the ANN model. In this paper, the four most common training functions (in the literature) have been tested: Levenberg–Marquardt (LM), Scaled Conjugate Gradient (SCG), Resilient Backpropagation (RB), and Variable Learning Rate Backpropagation (VLRB). Analyzing the sensitivity of these training functions provides valuable insights into their strengths, weaknesses, and suitability for the proposed model.

Electric Field

As can be seen in Figure 5, the LM training function has the lowest RMSE in comparison to other training functions. In contrast, its best setup has a higher response time than the others. Also, the CFNN method has more accurate results than the FFNN in all training functions. Therefore, the best model at this step is the CFNN trained with LM.

Magnetic Field

Similar to the electric field, according to Figure 6, the LM has the lowest RMSE and highest R-squared. Also, the CFNN model’s RMSE is less than the FFNN for all training functions. Moreover, in terms of response time, there is no significant difference between all the options. So, the best option is the CFNN method with the LM training function.

3.3.4. Activation Functions

Activation functions introduce non-linearity to the neural network, enabling it to model complex relationships and learn intricate patterns from the data. In this paper, four activation functions including Purelin, Tansig, Satlin, and Logsig have been considered for this step of analysis as the most common functions used in the literature. The equations of these activation functions are as follows [40,41]:
Pure linear (Purelin):
$$ F(x) = x \tag{8} $$

Saturated linear (Satlin):
$$ F(x) = \begin{cases} 0, & x < 0 \\ x, & 0 \le x \le 1 \\ 1, & x > 1 \end{cases} \tag{9} $$

Hyperbolic tangent sigmoid (Tansig):
$$ F(x) = \tanh(x) \tag{10} $$

Log-sigmoid (Logsig):
$$ F(x) = \frac{1}{1 + \exp(-x)} \tag{11} $$
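For reference, the four functions defined in Equations (8)–(11) map directly onto a few NumPy one-liners. This is a plain transcription of the definitions above, not the MATLAB built-ins themselves.

```python
import numpy as np

def purelin(x):                      # Equation (8): pure linear
    return x

def satlin(x):                       # Equation (9): saturated linear, clipped to [0, 1]
    return np.clip(x, 0.0, 1.0)

def tansig(x):                       # Equation (10): hyperbolic tangent sigmoid
    return np.tanh(x)

def logsig(x):                       # Equation (11): log-sigmoid
    return 1.0 / (1.0 + np.exp(-x))
```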
It was assumed that the activation functions between the input layer and the hidden layer, and between successive hidden layers, are the same. Also, eight combinations of activation functions out of all possible pairs were chosen based on [19]. These were then assessed to check whether any of them results in higher accuracy than the others.
As can be seen in Figure 7 and Table 5, for the electric field, Purelin–Logsig for the CFNN and Tansig–Purelin for the FFNN resulted in the lowest RMSE and highest $R^2$, respectively. Also, according to Figure 8 and Table 6, Logsig–Purelin demonstrated the highest accuracy for both the CFNN and FFNN. It is worth noting that, for most pairs of activation functions, the CFNN has a higher $R^2$ and a lower RMSE than the FFNN, meaning that the CFNN gives more accurate estimates of both the electric and magnetic fields.

3.3.5. Training Ratio

The sensitivity of the model to the training data ratio must be assessed by considering different ratios of training data to the total data in the model’s dataset. It is common practice among AI researchers to use between 50% and 90% of the data for training and 5% to 25% each for the validation and testing processes. This ratio depends on the number of observations in the dataset, the number of input parameters, and so on.
In this paper, training ratios of 50%, 60%, and 70% were considered to find the best ratio in terms of model accuracy. Ratios of 80% and 90% were not studied because of the risk of overfitting the model. The results in Table 7 and Table 8 demonstrate that the 70% training ratio gives the highest accuracy for both the electric and magnetic fields.

3.3.6. Stability

The performance variability observed in ANN models, even when the network setup remains unchanged, is a common phenomenon and can be attributed to several factors. First and foremost, ANNs inherently depend on the initial weights assigned to their connections, which are typically initialized randomly. This initial condition can lead to different starting points for the learning process, resulting in divergent outcomes during training. Additionally, the data used for training play a crucial role: the splitting of the dataset into training, validation, and testing subsets is performed randomly, which can also lead to different model performances. Stability analysis is therefore important for ANN models, as it guarantees the robustness and dependability of their predictions. In this paper, each setup/configuration of the ANN models, at every step of the sensitivity analysis, was repeated 50 times to avoid any significant fluctuation, and the mean values of RMSE and R-squared were compared between models to assess their accuracy. As can be seen in Figure 9 and Figure 10, the RMSE values are almost the same and the fluctuations are not significant. Also, the SD of the RMSE for the electric field and magnetic field models, evaluated using Equation (5), is only 4.88 × 10−4 and 4.81 × 10−3, respectively. As these SD values are near zero, the fluctuation of the RMSE in both suggested models is negligible, and the suggested models can be considered highly stable.
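A stability check of this kind can be sketched as follows: the same architecture is retrained 50 times with different random seeds (fresh initial weights and a fresh data split each time) and the sample standard deviation of the test RMSE is reported. As before, scikit-learn's 'lbfgs' solver and the [1 3 7] default architecture are stand-in assumptions rather than the paper's exact setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

def stability_check(X, y, hidden_layers=(1, 3, 7), repeats=50):
    """Retrain the same architecture `repeats` times with different random seeds
    (new initial weights and a new 70/30 split each time) and return the mean
    and sample standard deviation of the test RMSE, mirroring the 50-run analysis."""
    rmses = []
    for seed in range(repeats):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=seed)
        model = MLPRegressor(hidden_layer_sizes=hidden_layers, activation='tanh',
                             solver='lbfgs', max_iter=5000, random_state=seed)
        model.fit(X_tr, y_tr)
        rmses.append(mean_squared_error(y_te, model.predict(X_te)) ** 0.5)
    return np.mean(rmses), np.std(rmses, ddof=1)   # mean RMSE and its sample SD
```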

3.4. Comparison with Other Works

In this section, the accuracy of the model presented in this paper is compared with previous works published in the literature. As can be seen in Table 9, the RMSE of the proposed model is the lowest among the compared papers. In terms of R-squared, the proposed model reaches almost 0.999, which is higher than the other published papers. Moreover, the relative error of the proposed models never exceeded 5.87% (worst case, for the magnetic field), while for the other published papers it was reported between roughly 3% and 190%.

4. Conclusions

Both the finite element and experimental methods used by researchers for electric and magnetic field evaluation are extremely time-consuming and expensive. This paper proposes an extremely fast and low-cost alternative based on AI methods built on neural networks. The cascade-forward neural network (CFNN), the main ANN method of this paper, demonstrated higher accuracy than the commonly used feed-forward neural network (FFNN). For the electric and magnetic field estimation, the CFNN exhibited a reduction in RMSE of 56% and 5%, respectively, compared to the FFNN. The other important findings can be summarized as follows:
  • The training time of both models did not exceed 10 s, while it can take some days for the experimental and FEM methods.
  • The response times of both proposed models were less than 10 ms, even using a regular personal computer. Therefore, they are very suitable for real-time use.
  • Although the CFNN models have a more complex architecture, they had almost the same response time as the FFNN models, with higher accuracy.
It is worth noting that CFNN models are versatile and can handle various datasets in many engineering applications. These models have been developed for one layout and have very high accuracy for that layout. To reach similar accuracy for other layouts, the models can be retrained and updated with new datasets related to them.

Author Contributions

Conceptualization, M.Y.-A.; methodology, S.A.B., W.S. and M.Y.-A.; formal analysis, S.A.B. and M.Y.-A.; resources, W.S. and M.Y.-A.; data curation, S.A.B.; writing—original draft preparation, S.A.B. and M.Y.-A.; writing—review and editing, W.S. and M.Y.-A.; funding acquisition, M.Y.-A.; supervision, W.S. and M.Y.-A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available on request due to restrictions, e.g., privacy or ethical reasons.

Acknowledgments

For the purpose of open access, the author(s) has applied a Creative Commons Attribution (CC BY) license to any Author Accepted Manuscript version arising from this submission.

Conflicts of Interest

The authors declare no conflict of interest.

Nomenclature

Abbreviation | Description
EMF | Electric and magnetic field
OHTL | Overhead transmission line
AI | Artificial intelligence
ANN | Artificial neural network
FFNN | Feed-forward neural network
CFNN | Cascade-forward neural network
LM | Levenberg–Marquardt
SCG | Scaled Conjugate Gradient
RB | Resilient Backpropagation
VLRB | Variable Learning Rate Backpropagation
FEM | Finite element method
MSE | Mean Squared Error
RMSE | Root Mean Squared Error
R-squared | Coefficient of determination
SD | Standard deviation

References

  1. Yuan, S.; Huang, Y.; Zhou, J.; Xu, Q.; Song, C.; Thompson, P. Magnetic Field Energy Harvesting under Overhead Power Lines. IEEE Trans. Power Electron. 2015, 30, 6191–6202. [Google Scholar] [CrossRef]
  2. Khawaja, A.H.; Huang, Q.; Li, J.; Zhang, Z. Estimation of Current and Sag in Overhead Power Transmission Lines with Optimized Magnetic Field Sensor Array Placement. IEEE Trans. Magn. 2017, 53, 6100210. [Google Scholar] [CrossRef]
  3. Dein, A.Z.E.; Gouda, O.E.; Lehtonen, M.; Darwish, M.M.F. Mitigation of the Electric and Magnetic Fields of 500-kV Overhead Transmission Lines. IEEE Access 2022, 10, 33900–33908. [Google Scholar] [CrossRef]
  4. Forssén, U.M.; Lönn, S.; Ahlbom, A.; Savitz, D.A.; Feychting, M. Occupational magnetic field exposure and the risk of acoustic neuroma. Am. J. Ind. Med. 2006, 49, 112–118. [Google Scholar] [CrossRef] [PubMed]
  5. Greenland, S.; Sheppard, A.R.; Kaune, W.T.; Poole, C.; Kelsh, M.A. A Pooled Analysis of Magnetic Fields, Wire Codes, and Childhood Leukemia. Epidemiology 2000, 11, 624–634. [Google Scholar] [CrossRef] [PubMed]
  6. Delaplace, L.R.; Group, W.; Reilly, J.P. Electric and Magnetic Field Coupling from High Voltage Ac Power Transmission Lines-Classification of Short-Term Effects on People. IEEE Trans. Power Appar. Syst. 1978, 6, 2243–2252. [Google Scholar] [CrossRef]
  7. Lucca, G.; Sandrolini, L.; Popoli, A.; Simonazzi, M.; Cristofolini, A. Assessment of AC Corrosion Probability in Buried Pipelines with a FEM-Assisted Stochastic Approach. Appl. Sci. 2023, 13, 7669. [Google Scholar] [CrossRef]
  8. Wan, L.; Negri, S.; Spadacini, G.; Grassi, F.; Pignari, S.A. Enhanced Impedance Measurement to Predict Electromagnetic Interference Attenuation Provided by EMI Filters in Systems with AC/DC Converters. Appl. Sci. 2022, 12, 12497. [Google Scholar] [CrossRef]
  9. Ituabhor, O.; Isabona, J.; Zhimwang, J.T.; Risi, I. Cascade Forward Neural Networks-based Adaptive Model for Real-time Adaptive Learning of Stochastic Signal Power Datasets. Int. J. Comput. Netw. Inf. Secur. 2022, 14, 63–74. [Google Scholar] [CrossRef]
  10. Abdel-Gawad, N.M.K.; El Dein, A.Z.; Magdy, M. Mitigation of induced voltages and AC corrosion effects on buried gas pipeline near to OHTL under normal and fault conditions. Electr. Power Syst. Res. 2015, 127, 297–306. [Google Scholar] [CrossRef]
  11. Ahlbom, A.; Day, N.; Feychting, M.; Roman, E.; Skinner, J.; Dockerty, J.; Linet, M.; McBride, M.; Michaelis, J.; Olsen, J.H.; et al. A pooled analysis of magnetic fields and childhood leukaemia. Br. J. Cancer 2000, 83, 692–698. [Google Scholar] [CrossRef] [PubMed]
  12. Coleman, M.P.; Bell, C.M.J.; Taylor, H.L.; Primic-Zakelj, M. Leukaemia and residence near electricity transmission equipment: A case-control study. Br. J. Cancer 1989, 60, 793. [Google Scholar] [CrossRef] [PubMed]
  13. Carlak, H.F.; Özen, Ş.; Bilgin, S. Low-frequency exposure analysis using electric and magnetic field measurements and predictions in the proximity of power transmission lines in urban areas. Turk. J. Electr. Eng. Comput. Sci. 2017, 25, 3994–4005. [Google Scholar] [CrossRef]
  14. Abdallah, M.A.; Mahmoud, S.A.; Anis, H.I. Interaction of Environmental ELF Electromagnetic Fields with Living Bodies. Electr. Mach. Power Syst. 2000, 28, 301–312. [Google Scholar] [CrossRef]
  15. Helhel, S.; Ozen, S. Assessment of occupational exposure to magnetic fields in high-voltage substations (154/34.5 kV). Radiat. Prot. Dosim. 2007, 128, 464–470. [Google Scholar] [CrossRef] [PubMed]
  16. McKinlay, A.; Repacholi, M. Effects of static magnetic fields relevant to human health. Prog. Biophys. Mol. Biol. 2005, 87, i. [Google Scholar] [CrossRef]
  17. Matthes, R.; Bernhardt, J.H.; McKinlay, A.F. International Commission on Non-Ionizing Radiation Protection. Guidelines on Limiting Exposure to Non-Ionizing Radiation: A Reference Book Based on the Guidelines on Limiting Exposure to Non-Ionizing Radiation and Statements on Special Applications. International Commission on Non-Ionizing Radiation Protection. 1999. [Google Scholar]
  18. Khawaja, A.H.; Huang, Q.; Khan, Z.H. Monitoring of Overhead Transmission Lines: A Review from the Perspective of Contactless Technologies. Sens. Imaging 2017, 18, 24. [Google Scholar] [CrossRef]
  19. Yazdani-Asrami, M.; Sadeghi, A.; Song, W. Ultra-fast Surrogate Model for Magnetic Field Computation of a Superconducting Magnet Using Multi-layer Artificial Neural Networks. J. Supercond. Nov. Magn. 2023, 36, 575–586. [Google Scholar] [CrossRef]
  20. Sivakami, P.; Subburaj, P. EMF estimation of over head transmission line using CS algorithm with aid of NFC. Int. J. Electr. Eng. Inform. 2016, 8, 624–643. [Google Scholar] [CrossRef]
  21. Xu, R.; Zhu, H.; Yuan, J. Electric-field intrabody communication channel modeling with finite-element method. IEEE Trans. Biomed. Eng. 2011, 58 Pt 1, 705–712. [Google Scholar] [CrossRef]
  22. Sizov, G.Y.; Ionel, D.M.; Demerdash, N.A.O. Modeling and parametric design of permanent-magnet AC machines using computationally efficient finite-element analysis. IEEE Trans. Ind. Electron. 2012, 59, 2403–2413. [Google Scholar] [CrossRef]
  23. Kafafy, R.; Lin, T.; Lin, Y.; Wang, J. Three-dimensional immersed finite element methods for electric field simulation in composite materials. Int. J. Numer. Methods Eng. 2005, 64, 940–972. [Google Scholar] [CrossRef]
  24. Nekhoul, B.; Guerin, C.; Labie, P.; Meunier, G.; Feuillet, R.; Bnmotte, X. A Finite Element Method for Calculating the Electromagnetic Fields Generated by Substation Grounding Systems. IEEE Trans. Magn. 1995, 31, 2150–2153. [Google Scholar] [CrossRef]
  25. Benguesmia, H.; M’Ziou, N.; Boubakeur, A. Simulation of the potential and electric field distribution on high voltage insulator using the finite element method. Diagnostyka 2018, 19, 41–52. [Google Scholar] [CrossRef]
  26. Yazdani-Asrami, M.; Song, W.; Morandi, A.; De Carne, G.; Murta-Pina, J.; Pronto, A.; Oliveira, R.; Grilli, F.; Pardo, E.; Parizh, M.; et al. Roadmap on artificial intelligence and big data techniques for superconductivity. Supercond. Sci. Technol. 2023, 36, 043501. [Google Scholar] [CrossRef]
  27. Yazdani-Asrami, M.; Sadeghi, A.; Song, W.; Madureira, A.; Murta-Pina, J.; Morandi, A.; Parizh, M. Artificial intelligence methods for applied superconductivity: Material, design, manufacturing, testing, operation, and condition monitoring. Supercond. Sci. Technol. 2022, 35, 123001. [Google Scholar] [CrossRef]
  28. Yazdani-Asrami, M.; Sadeghi, A.; Seyyedbarzegar, S.; Song, W. DC Electro-Magneto-Mechanical Characterization of 2G HTS Tapes for Superconducting Cable in Magnet System Using Artificial Neural Networks. IEEE Trans. Appl. Supercond. 2022, 32, 4605810. [Google Scholar] [CrossRef]
  29. Yazdani-Asrami, M.; Sadeghi, A.; Seyyedbarzegar, S.M.; Saadat, A. Advanced experimental-based data-driven model for the electromechanical behavior of twisted YBCO tapes considering thermomagnetic constraints. Supercond. Sci. Technol. 2022, 35, 054004. [Google Scholar] [CrossRef]
  30. Alihodzic, A.; Mujezinovic, A.; Turajlic, E. Electric and Magnetic Field Estimation under Overhead Transmission Lines Using Artificial Neural Networks. IEEE Access 2021, 9, 105876–105891. [Google Scholar] [CrossRef]
  31. Alkhasawneh, M.S.; Tay, L.T. A Hybrid Intelligent System Integrating the Cascade Forward Neural Network with Elman Neural Network. Arab. J. Sci. Eng. 2018, 43, 6737–6749. [Google Scholar] [CrossRef]
  32. Alkhasawneh, M.S. Hybrid Cascade Forward Neural Network with Elman Neural Network for Disease Prediction. Arab. J. Sci. Eng. 2019, 44, 9209–9220. [Google Scholar] [CrossRef]
  33. Mohammadi, M.R.; Hemmati-Sarapardeh, A.; Schaffie, M.; Husein, M.M.; Ranjbar, M. Application of cascade forward neural network and group method of data handling to modeling crude oil pyrolysis during thermal enhanced oil recovery. J. Pet. Sci. Eng. 2021, 205, 108836. [Google Scholar] [CrossRef]
  34. Ekonomou, L.; Fotis, G.P.; Maris, T.I.; Liatsis, P. Estimation of the electromagnetic field radiating by electrostatic discharges using artificial neural networks. Simul. Model. Pract. Theory 2007, 15, 1089–1102. [Google Scholar] [CrossRef]
  35. Salam, M.A.; Ang, S.P.; Rahman, Q.M.; Malik, O.A. Estimation of Magnetic Field Strength near Substation Using Artificial Neural Network. Int. J. Electron. Electr. Eng. 2016, 4, 166–171. [Google Scholar] [CrossRef]
  36. Turajlic, E.; Alihodzic, A.; Mujezinovic, A. Artificial Neural Network Models for Estimation of Electric Field Intensity and Magnetic Flux Density in The Proximity of Overhead Transmission Line. Radiat. Prot. Dosim. 2023, 199, 107–115. [Google Scholar] [CrossRef] [PubMed]
  37. Alzayed, M.; Chaoui, H.; Farajpour, Y. Maximum Power Tracking for a Wind Energy Conversion System Using Cascade-Forward Neural Networks. IEEE Trans. Sustain. Energy 2021, 12, 2367–2377. [Google Scholar] [CrossRef]
  38. Sadeghi, A.; Seyyedbarzegar, S.M.; Yazdani-Asrami, M. Transient analysis of a 22.9 kV/2 kA HTS cable under short circuit using equivalent circuit model considering different fault parameters. Phys. C Supercond. Appl. 2021, 589, 1353935. [Google Scholar] [CrossRef]
  39. Lee, D.K.; In, J.; Lee, S. Standard deviation and standard error of the mean. Korean J. Anesth. 2015, 68, 220–223. [Google Scholar] [CrossRef]
  40. Yazdani-Asrami, M.; Fang, L.; Pei, X.; Song, W. Smart fault detection of HTS coils using artificial intelligence techniques for large-scale superconducting electric transport applications. Supercond. Sci. Technol. 2023, 36, 085021. [Google Scholar] [CrossRef]
  41. Russo, G.; Yazdani-Asrami, M.; Scheda, R.; Morandi, A.; Diciotti, S. Artificial intelligence-based models for reconstructing the critical current and index-value surfaces of HTS tapes. Supercond. Sci. Technol. 2022, 35, 124002. [Google Scholar] [CrossRef]
Figure 1. Schematic of the networks of FFNN and CFNN.
Figure 2. Flowchart of the sensitivity analysis process.
Figure 3. Sensitivity analysis on the number of hidden layers in FFNN and CFNN for the electric field. (a) RMSE values; (b) R-squared values; (c) response time.
Figure 4. Hidden layers analysis for both FFNN and CFNN for the magnetic field. (a) RMSE; (b) R-squared; (c) response time.
Figure 5. Sensitivity analysis on the training functions for both FFNN and CFNN for triple-layer models for the electric field. (a) RMSE; (b) R-squared; (c) response time.
Figure 6. Sensitivity analysis on the training functions for both FFNN and CFNN for triple-layer models for the magnetic field. (a) RMSE; (b) R-squared; (c) response time.
Figure 7. RMSE values of the pairs of activation functions for both FFNN and CFNN for the electric field.
Figure 8. RMSE values of the pairs of activation functions for both FFNN and CFNN for the magnetic field.
Figure 9. RMSE distribution chart for 50 iterations for the best electric field model.
Figure 10. RMSE distribution chart for 50 iterations for the best magnetic field model.
Table 1. Sensitivity analysis of layers and neurons for FFNN for the electric field.

Layers | Neurons | RMSE | R-Squared | Response Time [ms]
1 | 11 | 0.006217 | 0.999936 | 6.651
2 | [3 15] | 0.003999 | 0.999978 | 8.621
3 | [5 7 7] | 0.004381 | 0.999980 | 9.362
4 | [5 9 13 5] | 0.006696 | 0.999956 | 12.652
5 | [5 9 9 13 5] | 0.006174 | 0.999920 | 14.598
Table 2. Sensitivity analysis of layers and neurons for CFNN for the electric field.

Layers | Neurons | RMSE | R-Squared | Response Time [ms]
1 | 13 | 0.010050 | 0.999817 | 6.543
2 | [1 11] | 0.002770 | 0.999984 | 8.467
3 | [1 3 7] | 0.001953 | 0.999993 | 9.151
4 | [1 3 3 3] | 0.002013 | 0.999994 | 11.206
5 | [2 2 2 4 2] | 0.001763 | 0.999994 | 12.797
Table 3. Sensitivity analysis of layers and neurons for FFNN for the magnetic field.

Layers | Neurons | RMSE | R-Squared | Response Time [ms]
1 | 6 | 4.97 × 10−2 | 9.99 × 10−1 | 3.27
2 | [3 9] | 2.53 × 10−2 | 9.99 × 10−1 | 4.70
3 | [5 3 3] | 2.66 × 10−2 | 9.99 × 10−1 | 6.31
4 | [5 5 9 9] | 3.33 × 10−2 | 9.99 × 10−1 | 5.04
5 | [5 5 13 5 1] | 2.85 × 10−2 | 9.99 × 10−1 | 5.25
Table 4. Sensitivity analysis of layers and neurons for CFNN for the magnetic field.

Layers | Neurons | RMSE | R-Squared | Response Time [ms]
1 | 5 | 4.87 × 10−2 | 9.95 × 10−1 | 3.32
2 | [3 3] | 2.46 × 10−2 | 9.99 × 10−1 | 4.59
3 | [3 7 1] | 2.53 × 10−2 | 9.83 × 10−1 | 7.48
4 | [5 5 1 5] | 2.77 × 10−2 | 9.99 × 10−1 | 7.67
5 | [5 5 9 5 5] | 3.13 × 10−2 | 9.99 × 10−1 | 6.84
Table 5. R-squared values of the pairs of activation functions for both FFNN and CFNN for the electric field.

Activation Function | FFNN | CFNN
Purelin–Tansig | 0.999957 | 0.999995
Tansig–Purelin | 0.999997 | 0.999979
Satlin–Tansig | 0.981759 | 0.99983
Tansig–Satlin | 0.998797 | 0.999928
Purelin–Logsig | 0.859171 | 0.999979
Logsig–Purelin | 0.998393 | 0.999986
Satlin–Logsig | 0.901135 | 0.999884
Logsig–Satlin | 0.999866 | 0.999949
Table 6. R-squared values of the pairs of activation functions for both FFNN and CFNN for the magnetic field.

Activation Function | FFNN | CFNN
Purelin–Tansig | 0.995210 | 0.995201
Tansig–Purelin | 0.999231 | 0.999198
Satlin–Tansig | 0.897353 | 0.998647
Tansig–Satlin | 0.621461 | 0.885737
Purelin–Logsig | 0.867373 | 0.867271
Logsig–Purelin | 0.999280 | 0.999177
Satlin–Logsig | 0.632067 | 0.879440
Logsig–Satlin | 0.533737 | 0.886112
Table 7. The RMSE and R-squared of different data training ratios for the electric field estimation.

Training Ratio | RMSE | R-Squared
70% | 1.58 × 10−3 | 0.999979
60% | 1.71 × 10−3 | 0.999995
50% | 1.82 × 10−3 | 0.999992
Table 8. The RMSE and R-squared of different data training ratios for the magnetic field estimation.

Training Ratio | RMSE | R-Squared
70% | 2.49 × 10−2 | 0.999177
60% | 2.62 × 10−2 | 0.999177
50% | 2.96 × 10−2 | 0.998943
Table 9. Comparison of accuracy of the proposed model with other works.

Reference | Method | Field | RMSE | R² | Relative Error
[34] | MLPNN | Electric | - | - | 5.437–23.62%
[34] | MLPNN | Magnetic | - | - | 3.255–11.5%
[13] | MLPNN | Electric | 0.030855 | - | -
[13] | MLPNN | Magnetic | 0.02719 | - | -
[13] | GRNN | Electric | 0.053084 | - | -
[13] | GRNN | Magnetic | 0.03666 | - | -
[20] | NFC | Electric | - | - | 8–135%
[20] | NFC | Magnetic | - | - | 5–190%
[36] | FFNN | Electric | 0.6172 | 0.9121 | -
[36] | FFNN | Magnetic | 0.3602 | 0.9471 | -
[35] | FFNN | Electric | - | - | -
[35] | FFNN | Magnetic | - | 0.709–0.988 | -
Present paper | CFNN | Electric | 0.001708 | 0.99995 | 0.01–3.281%
Present paper | CFNN | Magnetic | 0.0246 | 0.99920 | 0.05–5.87%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
