Article

Predictive Modeling of Signal Degradation in Urban VANETs Using Artificial Neural Networks

Bappa Muktar *, Vincent Fono and Meyo Zongo
1 Department of Computer Science, University of Quebec in Outaouais (UQO), 283 Boul. Alexandre-Taché, Gatineau, QC J8X 3X7, Canada
2 Department of Computer Science, University of Ngaoundéré, Ngaoundéré P.O. Box 454, Cameroon
* Author to whom correspondence should be addressed.
Electronics 2023, 12(18), 3928; https://doi.org/10.3390/electronics12183928
Submission received: 2 August 2023 / Revised: 10 September 2023 / Accepted: 14 September 2023 / Published: 18 September 2023
(This article belongs to the Special Issue AI Used in Mobile Communications and Networks)

Abstract

In urban Vehicular Ad Hoc Network (VANET) environments, buildings play a crucial role as they can act as obstacles that attenuate the transmission signal between vehicles. Such obstacles lead to multipath effects, which can substantially impact data transmission due to fading. Therefore, quantifying the impact of buildings on transmission quality is a key parameter of the propagation model, especially in critical scenarios involving emergency vehicles, where reliable communication is of utmost importance. In this research, we propose a supervised learning approach based on Artificial Neural Networks (ANNs) to develop a predictive model capable of estimating the level of signal degradation, represented by the Bit Error Rate (BER), from the obstacles perceived by moving emergency vehicles. By establishing a relationship between the level of signal degradation and the encountered obstacles, our proposed mechanism enables efficient routing decisions to be made prior to the transmission process. Consequently, data packets are routed through paths that exhibit the lowest BER. To collect the training data, we employed Network Simulator 3 (NS-3) in conjunction with the Simulation of Urban MObility (SUMO) simulator, leveraging real-world data sourced from the OpenStreetMap (OSM) geographic database. OSM enabled us to gather geospatial data related to the Two-Dimensional (2D) geometric structure of buildings, which served as input for our Artificial Neural Network (ANN). To determine the most suitable algorithm for our ANN, we assessed the accuracy of ten learning algorithms in MATLAB using five key metrics: Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), Correlation Coefficient (R), and Maximum Prediction Error (MaxPE). For each algorithm, we conducted fifteen iterations with ten hidden neurons and gauged its accuracy against the aforementioned metrics. Our analysis showed that the ANN trained with the Conjugate Gradient with Powell/Beale Restarts (CGB) learning algorithm outperformed the other algorithms, namely Levenberg–Marquardt (LM), Bayesian Regularization (BR), BFGS Quasi-Newton (BFG), Resilient Backpropagation (RP), Scaled Conjugate Gradient (SCG), Fletcher–Powell Conjugate Gradient (CGF), Polak–Ribiére Conjugate Gradient (CGP), One-Step Secant (OSS), and Variable Learning Rate Backpropagation (GDX), in terms of MSE, RMSE, MAE, R, and MaxPE. The BER prediction by our ANN incorporates the Two-Ray Ground (TRG) propagation model, an adjustable parameter within NS-3. When subjected to 300 new samples, the trained ANN's simulation outcomes illustrated its capability to learn, generalize, and successfully predict the BER for a new data instance. Overall, our research contributes to enhancing the performance and reliability of communication in urban VANET environments, especially in critical scenarios involving emergency vehicles, by leveraging supervised learning and artificial neural networks to predict signal degradation levels and optimize routing decisions accordingly.

1. Introduction

In the dynamic realm of vehicular technology, inter-vehicular communications serve as a foundational pillar, driving advancements in road safety, traffic management, and passenger experience. Enabled by Vehicular Ad-Hoc Networks (VANETs) [1], these communications are facilitated through the On-Board Unit (OBU) devices with which vehicles are equipped. The primary communication paradigms are Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I). The V2I paradigm relies on strategically placed Roadside Units (RSUs) to ensure uninterrupted data packet transmission. However, the dynamic nature of VANETs, marked by frequent topological changes due to vehicular speeds and mobility, poses challenges for data transmission. These challenges include intermittent connection disruptions, interference in high-traffic areas, and signal attenuation due to obstacles. A significant concern is the potential delay in data packet transmission from an emergency vehicle, which could jeopardize a patient's timely arrival at a medical facility. While there is extensive literature on channel characterization across urban, rural, and suburban areas [2,3,4,5,6], only a few studies have explored the potential of ANNs, stochastic models, and machine learning for channel modeling [7,8,9,10]. For example, Ref. [10] presents a detailed physical layer simulation tool in Simulink, specifically designed for vehicular communication modeling.

1.1. Problem Statement

The central challenge revolves around ensuring reliable data packet transmission paths, especially for emergency vehicles traversing intricate urban landscapes. Buildings and other infrastructural elements can significantly weaken signals, resulting in transmission errors. While traditional models offer some solutions, they may not fully capture the complexities of urban environments. This underscores the need for an advanced solution capable of accurately predicting transmission errors, thus facilitating proactive measures to maintain seamless communication.

1.2. Contributions

Our research tackles the aforementioned challenge through the following key contributions:
  • ANN-based BER Estimation: We propose a novel model utilizing artificial neural networks to predict the Bit Error Rate (BER) based on the characteristics of nearby buildings.
  • Incorporation of Environmental Characteristics: Our model integrates building information ($X_{\min}$, $X_{\max}$, $Y_{\min}$, $Y_{\max}$) and the physical attributes of the environment, including the antenna parameters of vehicles, as reflected in the NS-3 physical layer.
  • Data Collection: We employ Network Simulator 3 (NS-3) alongside the SUMO mobility simulator to extract real-world geospatial data from the OpenStreetMap geographic database, capturing the 2D geometric structure of buildings for our ANN.
  • Algorithm Assessment: We assess the efficacy of ten distinct learning algorithms for our ANN using five pivotal metrics, highlighting the superior performance of the Conjugate Gradient With Powell/Beale Restarts (CGB) learning algorithm.
  • Enhanced Communication Reliability: Our research augments the reliability of communication in urban VANET contexts, especially for emergency vehicles, by leveraging supervised learning and ANNs to predict signal degradation and optimize routing decisions.
The remainder of this paper is organized as follows: Section 2 provides a comprehensive literature review, focusing on relevant research in the field. Section 3 introduces artificial neural networks, highlighting their significance and application in our study. In Section 4, we detail the step-by-step process employed to develop the BER estimation model. Subsequently, Section 5 presents the obtained results, accompanied by a meticulous analysis. Finally, Section 6 concludes the paper, summarizing the key findings and discussing potential avenues for future research.

2. Related Work

The past decade has seen significant advancements in the realm of vehicular communications, with researchers exploring various facets of signal propagation, channel estimation, and path loss prediction, especially in urban vehicular environments. A prominent study by Ding and Ho presented a digital-twin-enabled deep learning approach, which combined a digital representation of the real-world system with deep learning, outperforming existing methodologies in dynamic urban vehicular environments [11]. Similarly, a deep learning-based channel estimation methodology for Vehicle-to-Everything (V2X) communications has been proposed, addressing the rapid time variations and non-stationary characteristics of wireless channels [12]. Wang and Lu demonstrated the practical constraints of translating physical-layer network coding theories integrated with deep neural networks into real-world applications for vehicular ad-hoc networks [13].
Path loss prediction has been another area of focus. Zhang et al. [14] introduced a comprehensive exploration of machine-learning-driven path loss prediction, highlighting the superior performance of machine-learning models compared to traditional models. Jo et al. [15] further advanced this by synergizing three pivotal techniques, ANN, Gaussian process, and PCA, for feature selection, presenting a holistic machine-learning framework for path loss modeling. Aldossari [16] explored the challenges of signal propagation in indoor environments, especially within the realm of 5G, proposing a data-driven approach leveraging artificial intelligence.
Vehicular Ad-Hoc Networks (VANETs) have also been extensively researched. Liu, St. Amour, and Jaekel focused on the transmission of basic safety messages in VANETs, proposing a reinforcement learning-based congestion control approach [17]. Turan and Coleri emphasized the potential of vehicular visible light communication as a V2X communication method, introducing a machine-learning framework for improved path loss modeling [18]. Moreover, Ramya et al. [19] proposed a non-parametric, data-driven approach using the Random Forest learning method for V2V path loss prediction.
Signal strength prediction is also a critical domain. Igwe et al. [20] introduced a methodology harnessing the power of ANNs to compute received signal strength based on atmospheric parameters. Thrane et al. [21] further explored this by leveraging satellite imagery with deep learning for signal strength prediction. Bahramnejad and Movahhedinia presented an analytical framework for estimating the reliability of V2V communications within cognitive radio-VANETs, integrating a Markov model and an ANN model for streamlined reliability estimation [22].
Although these studies have made significant advancements within their respective domains, our present work distinguishes itself by focusing on signal degradation in urban VANETs. Our approach employs supervised learning anchored in ANNs to estimate signal degradation based on obstacles encountered by moving vehicles.
Table 1 concisely summarizes these studies based on their focus areas, primary methodologies, and applicability, particularly within the urban VANET context:

3. Artificial Neural Networks

Artificial Neural Networks (ANNs) are powerful models for statistical learning, drawing inspiration from the intricate workings of biological neural networks. ANNs have gained significant popularity within machine learning [23]. Several research papers [24,25] in the realm of medicine harness artificial neural networks to address a diverse range of challenges. These networks are composed of interconnected “neurons” that communicate with each other via synapses, mimicking the neural connections found in living organisms. The strength of these connections is adaptively modified based on input and output signals, making ANNs highly suitable for supervised learning tasks.
A typical neural network comprises three essential components: the input layer, the hidden layer(s), and the output layer. This structure can be envisioned as a black box, as illustrated in Figure 1 below, encapsulating the intricate computations performed by the network. The input layer receives the initial data, which is then processed through the interconnected hidden layer(s), ultimately producing an output through the output layer.
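To make this structure concrete, the short MATLAB sketch below builds such a network with the Neural Network Toolbox used later in this work; the layer sizes and the dummy data are purely illustrative and are not taken from our dataset.

```matlab
% Minimal sketch of the input -> hidden -> output structure described above.
% Sizes are illustrative: 5 input features, 10 hidden neurons, 1 output.
net = fitnet(10);                 % feedforward network with one hidden layer

X = rand(5, 100);                 % dummy inputs: 5 features x 100 samples
T = rand(1, 100);                 % dummy targets: 1 value per sample

net = configure(net, X, T);       % sizes the input and output layers from the data
view(net);                        % displays the input/hidden/output diagram
```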

4. Supervised Learning for Estimating the Bit Error Rate (BER)

In this section, we present our approach for applying supervised learning to address our research problem. Our approach encompasses seven key steps, outlined as follows:

4.1. Data Collection

The data collection phase is crucial for establishing a comprehensive knowledge base for machine learning. In our study, data are collected through simulations, enabling us to obtain essential information. This includes the geometric structure of the building ($X_{\min}$, $X_{\max}$, $Y_{\min}$, $Y_{\max}$), the signal-to-noise ratio, the number of transmitted and received bits, the percentage of received bits, and the Bit Error Rate (BER). These collected data points serve as the foundation for our subsequent machine-learning analyses.
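As an illustration of what a single collected observation looks like, the MATLAB sketch below assembles one row with these fields; the numeric values and variable names are placeholders rather than actual NS-3 output (a real excerpt is given in Table 3).

```matlab
% One illustrative observation with the fields listed above.
% Values and names are placeholders, not actual NS-3 output.
obs = table( ...
    100, 200, 180, 260, ...                    % building geometry: MinX, MinY, MaxX, MaxY
    20, ...                                    % signal-to-noise ratio (dB)
    1500, 900, 60, 40, ...                     % bits sent, bits received, % received, BER (%)
    'VariableNames', {'MinX','MinY','MaxX','MaxY','SNR', ...
                      'BitsSent','BitsReceived','PctReceived','BER'});
disp(obs)
```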

4.1.1. Experimental Environment for Data Collection

The experimental environment used for data collection consists of two simulators: a network simulator and a traffic simulator. To enable comprehensive studies based on the protocols under analysis, we chose NS-3 (Network Simulator 3) [26] as our network simulator. NS-3 offers a wide range of network components that facilitate diverse investigations in the field of wireless network technologies [27]. Moreover, NS-3 supports essential standards such as IEEE 802.11 PHY/MAC [28], 1609/WAVE [29], and 802.11p [30,31], enabling realistic studies on Vehicular Ad Hoc Networks (VANETs). Furthermore, NS-3 provides support for Dedicated Short Range Communication (DSRC) [32], a technology widely employed in North America for real-world vehicle communications testing.
To incorporate a mobility model specific to vehicles, we opted to utilize SUMO (Simulation of Urban MObility) [33] as our traffic simulator. SUMO is a frequently employed tool in studies related to traffic analysis in road networks. Additionally, SUMO offers the capability to interact with external applications through a socket connection. This interaction is facilitated by TraCi (Traffic Control Interface), enabling simulations in a client/server mode. Figure 2 shows a visual representation of our experimental environment.

4.1.2. Experiment Scenarios for Data Gathering

This subsection provides a concise overview of the approach employed to specify urban scenarios using the SUMO simulation framework. Within SUMO, a range of tools was utilized to define the urban components, including obstacles such as buildings. Figure 3 illustrates the process of generating urban scenarios in SUMO.
In our study, it was crucial to accurately represent the geometric structure of the network, considering urban elements such as buildings. To accomplish this, we leveraged the capabilities of the NETCONVERT and POLYCONVERT tools provided by the SUMO traffic simulator.
The NETCONVERT tool facilitated the conversion of files (e.g., OSM) containing information about the geometric structure of road networks from diverse sources into a format compatible with SUMO. Specifically, for our research, we imported the specifications of the geometric structure of the road network from the OpenStreetMap platform [34].
The POLYCONVERT tool was employed to transform the geometric forms (i.e., buildings) obtained from various sources into a visualizable representation within the graphical interface of SUMO.
After defining the urban model, the next step is to develop a script in NS-3 that builds upon this model.

4.1.3. Examples of Urban Model Specification Using SUMO

In this subsection, we present two illustrative examples of urban model specification using SUMO.

Signal Degradation between Two Vehicles Due to Obstacle Presence

The specific scenario can be described as follows: At time instant $t_0$, two vehicles, $V_1$ and $V_2$, act as the transmitter and receiver, respectively, and are located on a road segment. At $t_0 + \Delta t$, the transmission between $V_1$ and $V_2$ is disrupted due to the presence of an obstacle. This scenario effectively demonstrates the impact of obstacles, such as walls or buildings, on signal degradation. To quantify the extent of signal degradation caused by an obstacle, our study focuses on the transmission between these two vehicles. By limiting the transmission to only two vehicles, we aim to gain a deeper understanding of the obstacle's impact on the signal degradation phenomenon. Figure 4 below visually depicts the aforementioned scenario using SUMO.

Signal Degradation Between Two Vehicles Caused by Bridge Presence

In this scenario, we consider two vehicles located within transmission range, traversing a two-lane bridge. The communication signal between these vehicles experiences attenuation due to the presence of the bridge. This scenario emphasizes the phenomenon of signal attenuation in an enclosed space. Figure 5 below depicts the described scenario, illustrating the signal attenuation caused by the presence of the bridge.
In Figure 4 and Figure 5, the vehicles $V_1$ and $V_2$ are depicted in red and yellow, respectively. The pink illustrations represent the obstacles that obstruct the transmission signal between $V_1$ and $V_2$.

4.1.4. NS-3 Simulation Parameters for Data Collection

In this section, we provide a summary of the simulation parameters used in NS-3 to establish the foundational knowledge for our learning system. Table 2 presents the experiment parameters utilized in the NS-3 simulations.
Furthermore, Table 3 demonstrates a sample of the simulation results, highlighting the obtained data.
Figure 6 illustrates the representation of an obstacle in our urban scenarios.
where $MinX = X_{\min}$, $MaxX = X_{\max}$, $MinY = Y_{\min}$, and $MaxY = Y_{\max}$.
Table 3 demonstrates how each geometric structure of a building impacts the transmission quality. This impact can be read from quality indicators such as the SNR and the BER.

4.2. Data Preprocessing

During this stage, we undertake data preprocessing to optimize the usability of input and output parameters for the learning algorithm. Our preprocessing strategy entails removing superfluous details, notably the quantity of received bits and the corresponding percentage of received bits. While these details are related to the predicted variable (Bit Error Rate or BER), they are identified as non-essential for the learning system. By eliminating these extraneous details, we refine the dataset to facilitate more efficient analysis and model training.
Moreover, we standardize our data using the Min–Max normalization technique. This approach enables us to perform a linear conversion on the original data within the range of 0 to +1, employing the following formula:
$X_{\text{norm}} = \dfrac{X - X_{\min}}{X_{\max} - X_{\min}}$
where:
  • $X$ is the original value;
  • $X_{\min}$ is the minimum value in the dataset;
  • $X_{\max}$ is the maximum value in the dataset;
  • $X_{\text{norm}}$ is the normalized value.
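The MATLAB fragment below expresses this normalization; the explicit bsxfun form mirrors the formula above, while mapminmax is the equivalent toolbox routine, which also stores the settings needed to normalize new samples later. The data matrix here is a dummy stand-in, not our actual dataset.

```matlab
% Min-Max normalization of each feature to [0, 1], matching the formula above.
% Rows are features, columns are samples (the toolbox convention).
X = rand(5, 300) * 100;                                   % dummy raw data

% Explicit form of the formula (bsxfun keeps this compatible with older MATLAB):
Xnorm = bsxfun(@rdivide, bsxfun(@minus, X, min(X, [], 2)), ...
               max(X, [], 2) - min(X, [], 2));

% Equivalent built-in; ps records min/max so new samples can be mapped the same way.
[XnormTb, ps] = mapminmax(X, 0, 1);
XnewNorm = mapminmax('apply', rand(5, 10) * 100, ps);
```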

4.3. Choice of Learning Function

In this section, we explore the process of selecting an appropriate learning function for our model. It is important to note that our selection is limited to the ten algorithms available in MATLAB 2016a, as this is the version we are using. To determine the most suitable option, we employ statistical indicators such as Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Correlation Coefficient (R), and Maximum Prediction Error (MaxPE) to evaluate the effectiveness of each machine-learning function. A comprehensive assessment of these learning functions, all of which were tested using MATLAB, is provided in Table 4.
  • The Mean Absolute Error (MAE) quantifies the average magnitude of the absolute differences between the observed and forecasted values in the dataset. It provides a measure of the residuals present in the dataset:
    $\text{MAE} = \dfrac{1}{n} \sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|$
  • The Mean Squared Error (MSE) represents the mean of the squared differences between the actual and predicted values within a dataset. This metric gauges the extent of variance in the residuals:
    $\text{MSE} = \dfrac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2$
  • The Root Mean Squared Error (RMSE) is the square root of the Mean Squared Error, serving as a metric to gauge the standard deviation of residuals:
    $\text{RMSE} = \sqrt{\text{MSE}} = \sqrt{\dfrac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2}$
  • The Correlation Coefficient (R) quantifies the relationship between two variables; the closer its value is to +1, the stronger the positive association between the variables in question:
    $R = \dfrac{\sum_{i=1}^{n} (y_i - \bar{y})(\hat{y}_i - \bar{\hat{y}})}{\sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2} \, \sqrt{\sum_{i=1}^{n} (\hat{y}_i - \bar{\hat{y}})^2}}$
  • The Maximum Prediction Error (MaxPE) represents the largest discrepancy between the observed and predicted values. It serves as a metric to gauge the maximum error in predictions:
    $\text{MaxPE} = \max_{i} \left| y_i - \hat{y}_i \right|$
where:
  • $y_i$ is the observed value;
  • $\hat{y}_i$ is the predicted value;
  • $\bar{y}$ and $\bar{\hat{y}}$ are the means of the observed and predicted data points, respectively;
  • $n$ is the number of data points.
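For concreteness, the MATLAB helper below computes these five criteria for a vector of observed values y and predictions yhat. It is a minimal sketch (saved as accuracyMetrics.m); the function and field names are illustrative and are not the exact evaluation script used in our experiments.

```matlab
% accuracyMetrics.m -- five accuracy criteria for observed y and predicted yhat
% (both 1 x n row vectors). Minimal sketch; names are illustrative.
function m = accuracyMetrics(y, yhat)
    e       = y - yhat;
    m.MAE   = mean(abs(e));          % Mean Absolute Error
    m.MSE   = mean(e.^2);            % Mean Squared Error
    m.RMSE  = sqrt(m.MSE);           % Root Mean Squared Error
    m.MaxPE = max(abs(e));           % Maximum Prediction Error
    C       = corrcoef(y, yhat);     % 2 x 2 correlation matrix
    m.R     = C(1, 2);               % Correlation Coefficient
end
```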
Following our assessment, we opted for the Conjugate Gradient with Powell/Beale Restarts learning function for our model, as it recorded the lowest MAE, MSE, RMSE, and MaxPE and the highest R value. To gain a visual understanding of how the various learning functions perform based on the aforementioned metrics, please consult Figure 7 provided below.
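The comparison summarized in Table 4 can be sketched in MATLAB as follows: each of the ten training functions is run for fifteen repetitions with ten hidden neurons, and the criteria are averaged. Xnorm and Tnorm stand for the normalized inputs and targets from Section 4.2, accuracyMetrics is the helper sketched above, and the dummy data line only makes the fragment self-contained; this is not the exact benchmarking script.

```matlab
% Sketch of the ten-algorithm comparison (15 repetitions, 10 hidden neurons).
Xnorm = rand(5, 500); Tnorm = rand(1, 500);   % dummy stand-ins for the normalized dataset
algs  = {'trainlm','trainbr','trainbfg','trainrp','trainscg', ...
         'traincgb','traincgf','traincgp','trainoss','traingdx'};
nReps = 15;
for a = 1:numel(algs)
    for r = 1:nReps
        net = fitnet(10, algs{a});
        net.trainParam.showWindow = false;                 % train silently
        net = train(net, Xnorm, Tnorm);
        m(r) = accuracyMetrics(Tnorm, net(Xnorm));         %#ok<SAGROW>
    end
    fprintf('%-10s  MSE %.1e  RMSE %.1e  MAE %.1e  MaxPE %.1e  R %.2f\n', ...
        algs{a}, mean([m.MSE]), mean([m.RMSE]), mean([m.MAE]), ...
        mean([m.MaxPE]), mean([m.R]));
end
```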

4.4. Data Splitting for Training, Testing, and Validation

To ensure proper evaluation and validation of our model, the gathered data are split into distinct subsets. This step allows for the allocation of data for training, testing, and validation purposes, ensuring robust performance assessment. The following example illustrates the distribution of the data:
  • 70% of the data is allocated for the learning phase;
  • 15% of the data is reserved for validation purposes;
  • Another 15% is dedicated to testing the model’s performance.
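In the MATLAB toolbox, this split is expressed through the network's data-division settings, as in the sketch below; the ratios match the text, while the network size and the data are illustrative stand-ins.

```matlab
% 70% / 15% / 15% random split into training, validation, and test subsets.
Xnorm = rand(5, 500); Tnorm = rand(1, 500);   % dummy stand-ins for the normalized dataset
net = fitnet(10, 'traincgb');
net.divideFcn              = 'dividerand';    % random division of the samples
net.divideParam.trainRatio = 0.70;            % learning phase
net.divideParam.valRatio   = 0.15;            % validation
net.divideParam.testRatio  = 0.15;            % testing
[net, tr] = train(net, Xnorm, Tnorm);

% tr records which sample indices ended up in each subset:
fprintf('train %d, val %d, test %d samples\n', ...
    numel(tr.trainInd), numel(tr.valInd), numel(tr.testInd));
```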

4.5. Model Evaluation

The evaluation step plays a crucial role in assessing the model’s capabilities using data that have not been previously utilized for learning. By subjecting the model to this independent dataset, we gain valuable insights into its real-world potential and how it would perform in practical scenarios.

4.6. Tuning Model Parameters

After evaluating the model, we engage in fine-tuning the learning process to enhance its performance. This step involves adjusting specific parameters within the model, such as the number of neurons, learning functions, and synaptic weights, among others. By iteratively refining these parameters, we can optimize the model’s learning process and achieve improved overall performance.
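A simple way to carry out this tuning in MATLAB is to re-train the network over a range of hidden-layer sizes and compare the error on the held-out test subset, as sketched below; the candidate sizes are illustrative (Section 5 reports results for 18 and 45 neurons), and the data are again dummy stand-ins.

```matlab
% Re-train with different hidden-layer sizes and compare the held-out test MSE.
Xnorm = rand(5, 500); Tnorm = rand(1, 500);   % dummy stand-ins for the normalized dataset
for h = [10 18 30 45]                         % candidate numbers of hidden neurons
    net = fitnet(h, 'traincgb');
    net.trainParam.showWindow = false;
    [net, tr] = train(net, Xnorm, Tnorm);
    yTest    = net(Xnorm(:, tr.testInd));     % predictions on the test subset only
    testPerf = perform(net, Tnorm(tr.testInd), yTest);
    fprintf('%2d hidden neurons: test MSE = %.3e\n', h, testPerf);
end
```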

4.7. Estimation of Bit Error Rate (BER)

In this step, we leverage the trained model to estimate the Bit Error Rate (BER) based on new data. Specifically, we feed 300 new samples into the model, which then generates an output corresponding to the estimated BER value.
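In practice, this step amounts to normalizing the new feature vectors with the settings saved during preprocessing and reading the network's output, as in the sketch below; net, ps, and psT are assumed to be the trained network and the mapminmax settings for the inputs and the target from the previous steps, and Xnew is a placeholder for the 300 new samples.

```matlab
% Estimating the BER for 300 new observations with the trained network.
% net, ps, and psT are assumed to come from the training/preprocessing steps.
Xnew     = rand(5, 300) * 100;                  % placeholder for the 300 new samples
XnewNorm = mapminmax('apply', Xnew, ps);        % normalize with the training settings
berNorm  = net(XnewNorm);                       % network output on the [0, 1] scale
berEst   = mapminmax('reverse', berNorm, psT);  % back to the original BER scale
```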

5. Results and Analysis

In this section, we present the outcomes and provide an in-depth analysis resulting from the evaluation of our developed model.

5.1. Results

The results are presented in tabular format, highlighting the model’s accuracy based on criteria such as MSE, RMSE, MAE, MaxPE, and R.

5.1.1. Parameter Selection for the Developed Neural Network Model

To configure the developed neural network, we carefully determined the parameters for optimal performance. The model uses a hidden layer of eighteen neurons, as depicted in Figure 8. We divided the dataset into three subsets, allocating 70% for the learning phase, 15% for validation, and the remaining 15% for testing purposes. The following results were obtained using this specific configuration.
Table 5 below highlights the network accuracy with 18 hidden neurons.

5.1.2. Adjusting the Number of Hidden Neurons

To investigate the impact of varying the number of hidden neurons on the network’s performance, we conducted experiments using different configurations. To ensure consistency, the distribution of the entire learning dataset in the subsequent results was kept consistent with the default values (70% for learning, 15% for validation, and 15% for testing), while only the number of hidden neurons was altered. Table 6 below highlights the network accuracy.

5.2. Analysis

The results presented in Figure 9 and Figure 10 illustrate the regression plots for learning with 18 and 45 neurons, respectively. The correlation coefficient (R) is a measure of the relationship between the network outputs and the targets. A correlation coefficient close to 1 indicates good learning, while a value close to 0 indicates weak learning.
Regression plots provide a visual representation of the network outputs compared to the targets across learning, validation, or test datasets. A perfect fit occurs when the data points align along the 45-degree line, indicating that the network outputs perfectly match the targets. In our study, the learning process achieved a correlation coefficient of R = 0.89591 when using 18 hidden neurons and R = 0.99979 when using 45 hidden neurons. These values indicate a strong performance in terms of learning.
Figure 11 and Figure 12 demonstrate the impact of varying the number of hidden neurons on the estimated Bit Error Rate (BER) value. When using 45 hidden neurons, the correlation coefficient indicates good learning performance (R = 0.99979) compared to the network trained with 18 hidden neurons (R = 0.89591). However, the model’s generalization capacity, which refers to its ability to perform well on unseen data, diminishes when using 45 neurons, as depicted in Figure 12. In contrast, the model trained with 18 neurons (Figure 11) exhibits greater precision, with errors ranging between −1 and 1. Conversely, the model trained with 45 neurons loses precision, leading to higher error values. This phenomenon is attributed to overfitting, where an increase in hidden neurons improves the learning process but compromises the accuracy of the model.
Thus, depending on the specific problem being studied, it is crucial to strike a balance between learning performance and model accuracy. In our case, the results indicate that the model based on 18 hidden neurons achieves a better compromise between the two. Consequently, for optimal prediction performance, it is recommended to calibrate the network with 18 hidden neurons.
In conclusion, based on our analysis, using 18 hidden neurons yields superior prediction performance in our case study.

6. Conclusions

In this paper, we presented an empirical study based on neural networks that estimates the BER in relation to obstacles (primarily buildings) encountered by a mobile vehicle in a VANET network. This study is driven by the need for Quality of Service (QoS) in emergency communications within VANETs. By estimating the BER based on the geometric structure of obstacles, vehicles can optimize their data routing decisions more effectively.
For our future work, we plan to integrate the mechanism for BER estimation with the CL-ANTHOCNET routing protocol we previously developed [35]. Subsequently, we will evaluate its performance in scenarios involving the transmission needs of emergency vehicles.

Author Contributions

The structuring, methodology, writing, and implementation of the simulation using tools such as SUMO, NS-3, and Matlab were carried out by B.M. and V.F. The improvement of the writing and the analysis of the results were performed by M.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data supporting the findings of this study can be obtained by contacting the corresponding author, Muktar Bappa, at bappamuktar@gmail.com upon a reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ANN: Artificial Neural Network
BER: Bit Error Rate
BFG: BFGS Quasi-Newton
BR: Bayesian Regularization
BSP: Binary Space Partitioning
CBR: Constant Bit Rate
CGB: Conjugate Gradient with Powell/Beale Restarts
CGF: Fletcher–Powell Conjugate Gradient
CGP: Polak–Ribiére Conjugate Gradient
CL-AntHocNet: Cross-Layer AntHocNet
DSRC: Dedicated Short Range Communications
GDX: Variable Learning Rate Backpropagation
LM: Levenberg–Marquardt
LOS: Line of Sight
MAC: Media Access Control
MAE: Mean Absolute Error
MSE: Mean Squared Error
NS-3: Network Simulator 3
OBU: On-Board Unit
OSM: OpenStreetMap
OSS: One-Step Secant
PCA: Principal Component Analysis
PHY: Physical Layer
QoS: Quality of Service
RMSE: Root Mean Squared Error
RP: Resilient Backpropagation
RSU: Roadside Unit
SCG: Scaled Conjugate Gradient
SNR: Signal-to-Noise Ratio
SUMO: Simulation of Urban MObility
TraCI: Traffic Control Interface
UDP: User Datagram Protocol
V2I: Vehicle-to-Infrastructure
V2V: Vehicle-to-Vehicle
V2X: Vehicle-to-Everything
VANET: Vehicular Ad-Hoc Network
WAVE: Wireless Access in Vehicular Environments

References

  1. Nagel, R.; Eichler, S. Efficient and realistic mobility and channel modeling for VANET scenarios using OMNeT++ and INET-framework. In Proceedings of the 1st International Conference on Simulation Tools and Techniques for Communications, Networks and Systems & Workshops, Marseille, France, 3–7 March 2008; pp. 1–8. [Google Scholar]
  2. Memedi, A.; Tsai, H.M.; Dressler, F. Impact of realistic light radiation pattern on vehicular visible light communication. In Proceedings of the GLOBECOM 2017—2017 IEEE Global Communications Conference, Singapore, 4–8 December 2017; pp. 1–6. [Google Scholar]
  3. Elamassie, M.; Karbalayghareh, M.; Miramirkhani, F.; Kizilirmak, R.C.; Uysal, M. Effect of fog and rain on the performance of vehicular visible light communications. In Proceedings of the 2018 IEEE 87th Vehicular Technology Conference (VTC Spring), Porto, Portugal, 3–6 June 2018; pp. 1–6. [Google Scholar]
  4. Eldeeb, H.B.; Miramirkhani, F.; Uysal, M. A path loss model for vehicle-to-vehicle visible light communications. In Proceedings of the 2019 15th International Conference on Telecommunications (ConTEL), Graz, Austria, 3–5 July 2019; pp. 1–5. [Google Scholar]
  5. Karbalayghareh, M.; Miramirkhani, F.; Eldeeb, H.B.; Kizilirmak, R.C.; Sait, S.M.; Uysal, M. Channel modelling and performance limits of vehicular visible light communication systems. IEEE Trans. Veh. Technol. 2020, 69, 6891–6901. [Google Scholar] [CrossRef]
  6. Sharda, P.; Bhatnagar, M.R. Vehicular Visible Light Communication System: Modeling and Visualizing Critical Outdoor Propagation Characteristics. IEEE Trans. Veh. Technol. 2023. [Google Scholar] [CrossRef]
  7. Ahmed, A.A.; Malebary, S.J.; Ali, W.; Barukab, O.M. Smart traffic shaping based on distributed reinforcement learning for multimedia streaming over 5G-VANET communication technology. Mathematics 2023, 11, 700. [Google Scholar] [CrossRef]
  8. Hota, L.; Nayak, B.P.; Kumar, A.; Sahoo, B.; Ali, G.M.N. A performance analysis of VANETs propagation models and routing protocols. Sustainability 2022, 14, 1379. [Google Scholar] [CrossRef]
  9. Wang, L.; Iida, R.; Wyglinski, A.M. Vehicular network simulation environment via discrete event system modeling. IEEE Access 2019, 7, 87246–87264. [Google Scholar] [CrossRef]
  10. Lübke, M.; Su, Y.; Cherian, A.J.; Fuchs, J.; Dubey, A.; Weigel, R.; Franchi, N. Full physical layer simulation tool to design future 77 GHz JCRS-applications. IEEE Access 2022, 10, 47437–47460. [Google Scholar] [CrossRef]
  11. Ding, C.; Ho, I.W.H. Digital-twin-enabled city-model-aware deep learning for dynamic channel estimation in urban vehicular environments. IEEE Trans. Green Commun. Netw. 2022, 6, 1604–1612. [Google Scholar] [CrossRef]
  12. Pan, J.; Shan, H.; Li, R.; Wu, Y.; Wu, W.; Quek, T.Q. Channel estimation based on deep learning in vehicle-to-everything environments. IEEE Commun. Lett. 2021, 25, 1891–1895. [Google Scholar] [CrossRef]
  13. Wang, X.; Lu, L. Implementation of DNN-based Physical-Layer Network Coding. IEEE Trans. Veh. Technol. 2023, 72, 7380–7394. [Google Scholar] [CrossRef]
  14. Zhang, Y.; Wen, J.; Yang, G.; He, Z.; Wang, J. Path loss prediction based on machine learning: Principle, method, and data expansion. Appl. Sci. 2019, 9, 1908. [Google Scholar] [CrossRef]
  15. Jo, H.S.; Park, C.; Lee, E.; Choi, H.K.; Park, J. Path loss prediction based on machine learning techniques: Principal component analysis, artificial neural network, and Gaussian process. Sensors 2020, 20, 1927. [Google Scholar] [CrossRef] [PubMed]
  16. Aldossari, S.A. Predicting Path Loss of an Indoor Environment Using Artificial Intelligence in the 28-GHz Band. Electronics 2023, 12, 497. [Google Scholar] [CrossRef]
  17. Liu, X.; Amour, B.S.; Jaekel, A. A Reinforcement Learning-Based Congestion Control Approach for V2V Communication in VANET. Appl. Sci. 2023, 13, 3640. [Google Scholar] [CrossRef]
  18. Turan, B.; Coleri, S. Machine learning based channel modeling for vehicular visible light communication. IEEE Trans. Veh. Technol. 2021, 70, 9659–9672. [Google Scholar] [CrossRef]
  19. Ramya, P.M.; Boban, M.; Zhou, C.; Stańczak, S. Using learning methods for v2v path loss prediction. In Proceedings of the 2019 IEEE Wireless Communications and Networking Conference (WCNC), Marrakesh, Morocco, 15–18 April 2019; pp. 1–6. [Google Scholar]
  20. Igwe, K.; Oyedum, O.; Aibinu, A.; Ajewole, M.; Moses, A. Application of artificial neural network modeling techniques to signal strength computation. Heliyon 2021, 7, e06047. [Google Scholar] [CrossRef] [PubMed]
  21. Thrane, J.; Sliwa, B.; Wietfeld, C.; Christiansen, H.L. Deep learning-based signal strength prediction using geographical images and expert knowledge. In Proceedings of the GLOBECOM 2020—2020 IEEE Global Communications Conference, Taipei, Taiwan, 7–11 December 2020; pp. 1–6. [Google Scholar]
  22. Bahramnejad, S.; Movahhedinia, N. A reliability estimation framework for cognitive radio V2V communications and an ANN-based model for automating estimations. Computing 2022, 104, 1923–1947. [Google Scholar] [CrossRef]
  23. Yegnanarayana, B. Artificial Neural Networks; PHI Learning Pvt. Ltd.: New Delhi, India, 2009. [Google Scholar]
  24. Le, N.Q.K.; Ho, Q.T.; Ou, Y.Y. Using two-dimensional convolutional neural networks for identifying GTP binding sites in Rab proteins. J. Bioinform. Comput. Biol. 2019, 17, 1950005. [Google Scholar] [CrossRef]
  25. Kha, Q.H.; Ho, Q.T.; Le, N.Q.K. Identifying SNARE proteins using an alignment-free method based on multiscan convolutional neural network and PSSM profiles. J. Chem. Inf. Model. 2022, 62, 4820–4826. [Google Scholar] [CrossRef]
  26. Henderson, T.R.; Lacage, M.; Riley, G.F.; Dowell, C.; Kopena, J. Network simulations with the ns-3 simulator. SIGCOMM Demonstr. 2008, 14, 527. [Google Scholar]
  27. Arbabi, H.; Weigle, M.C. Highway mobility and vehicular ad-hoc networks in ns-3. In Proceedings of the 2010 Winter Simulation Conference, Baltimore, MD, USA, 5–8 December 2010; pp. 2991–3003. [Google Scholar]
  28. IEEE 802.11-2007; IEEE Standard for Information Technology—Telecommunications and Information Exchange Between Systems—Local and Metropolitan Area Networks—Specific Requirements—Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications. IEEE: New York, NY, USA, 2017.
  29. IEEE 1609.3-2010; IEEE Standard for Wireless Access in Vehicular Environments (WAVE)-Networking Services. IEEE: New York, NY, USA, 2010.
  30. IEEE 802.11p-2010; IEEE Standard for Information technology–Local and metropolitan area networks—Specific requirements—Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications Amendment 6: Wireless Access in Vehicular Environments. IEEE: New York, NY, USA, 2010.
  31. Bu, J.; Tan, G.; Ding, N.; Liu, M.; Son, C. Implementation and evaluation of WAVE 1609.4/802.11p in ns-3. In Proceedings of the 2014 Workshop on NS-3, Atlanta, GA, USA, 7 May 2014; pp. 1–8. [Google Scholar]
  32. Kenney, J.B. Dedicated short-range communications (DSRC) standards in the United States. Proc. IEEE 2011, 99, 1162–1182. [Google Scholar] [CrossRef]
  33. Behrisch, M.; Bieker, L.; Erdmann, J.; Krajzewicz, D. SUMO–simulation of urban mobility: An overview. In Proceedings of the SIMUL 2011, The Third International Conference on Advances in System Simulation, ThinkMind, Barcelona, Spain, 23–29 October 2011. [Google Scholar]
  34. Haklay, M.; Weber, P. Openstreetmap: User-generated street maps. IEEE Pervasive Comput. 2008, 7, 12–18. [Google Scholar] [CrossRef]
  35. Benyahia, I.; Bappa, M.; Perrot, J. A Variant of the AnthocNet Routing Protocol: Empirical Study with Application to Communications between Emergency Vehicles. In Proceedings of the ITS World Congress, Montreal, QC, Canada, 29 October–2 November 2017. [Google Scholar]
Figure 1. Components of an artificial neural network.
Figure 2. Experimental environment.
Figure 3. Specification process of the urban scenario through SUMO.
Figure 4. Signal degradation due to the presence of an obstacle between two vehicles.
Figure 5. Signal attenuation due to the presence of a bridge.
Figure 6. Representation of an obstacle in a plan.
Figure 7. Performance of learning algorithms.
Figure 8. Architecture of the developed neural network model.
Figure 9. Network regression plot illustrating learning performance with 18 neurons.
Figure 10. Network regression plot illustrating learning performance with 45 neurons.
Figure 11. Bit Error Rate (BER) estimation based on 300 samples utilizing a network architecture with 18 neurons.
Figure 12. Bit Error Rate (BER) estimation based on 300 samples utilizing a network architecture with 45 neurons.
Table 1. Comparison of research works.
Reference | Focus Area | Methodology | Applicability to Urban VANETs
[11] | Channel Estimation | Digital Twin with Deep Learning | Dynamic urban vehicular environments
[12] | Channel Estimation | LSTM networks and MLPs | V2X communications
[13] | Physical-Layer Network Coding | DNN-PNC Implementation | Vehicular Ad-Hoc Networks
[14] | Path Loss Prediction | Machine-Learning Models | 5G mobile communication systems
[15] | Path Loss Prediction | ANN, Gaussian Process, PCA | Wireless sensor networks
[16] | Path Loss Prediction | AI | Indoor environments at 28 GHz
[17] | Congestion Control in VANETs | Reinforcement Learning | Vehicular ad hoc networks
[18] | Channel Modeling for VVLC | Machine-Learning Framework | V2X communication
[19] | V2V Path Loss Prediction | Random Forest | V2V Communication
[20] | Signal Strength Prediction | ANNs | VHF broadcast stations
[21] | Signal Strength Prediction | Deep Learning with Geographical Imagery | Mobile networks
[22] | Reliability Estimation in CR-VANETs | Analytical Framework, Markov Model, ANN Model | Cognitive Radio-VANETs
This work | Signal Degradation in Urban VANETs | Supervised Learning with ANNs | Urban VANETs
Table 2. Experiment parameters in NS-3.
Parameter | Value
Data type | CBR
Transport protocol | UDP
Simulation duration | 300 s
Number of vehicles | 2
Radio propagation | ItuR1411LosPropagationLossModel, Two-Ray Ground
Number of bits transmitted | 1500 bytes
MAC protocol | IEEE 802.11p
Radio range | 250
Table 3. Excerpt from the simulation results.
Observation | MinX | MinY | MaxX | MaxY | SNR | Number of Bits Sent | Number of Bits Received | Percentage of Bits Received (in %) | Bit Error Rate (in %)
176283014802772515007955347
216349714358392315008515743
37007261318842015009366238
4178648311518117150010216832
53667290828314150011057426
617182110752310150012188119
774491718809698150012748515
819947892264596150013318911
9100783119813252215008805941
10688113934372115009086139
118988285268301915009646436
1210957416518671815009926634
1373240199747112150011627723
141767258474316150010497030
151049894156647713150011337624
1619285572049582015009366238
17141740910161927150013038713
181532845532156515001359919
19378849882181415001387928
20684784841419150012468317
211815269773282715007394951
2239264122751715150010777228
231357961519763315005693862
2417525410942444015003722575
25199066710136373515005133466
2670740214785650150090694
2716348391692134515002311585
Table 4. Evaluation of learning algorithms on MATLAB using 10 hidden neurons.
No. | Iterations | Learning Algorithm | Acronym | Mean MSE | Mean RMSE | Mean MAE | Mean MaxPE | Mean R
1 | 15 | Levenberg–Marquardt | LM | 3.8 × 10^-2 | 1.9 × 10^-1 | 1.5 × 10^-1 | 7.3 × 10^-4 | 6.5 × 10^-2
2 | 15 | Bayesian Regularization | BR | 8.3 × 10^-2 | 2.9 × 10^-1 | 2.5 × 10^-1 | 3.2 × 10^-4 | 1.1 × 10^-1
3 | 15 | BFGS Quasi-Newton | BFG | 4.0 × 10^-2 | 2.0 × 10^-1 | 1.5 × 10^-1 | 9.6 × 10^-4 | 7.1 × 10^-2
4 | 15 | Resilient Backpropagation | RP | 4.0 × 10^-2 | 2.0 × 10^-1 | 1.5 × 10^-1 | 9.6 × 10^-4 | 7.1 × 10^-2
5 | 15 | Scaled Conjugate Gradient | SCG | 3.8 × 10^-2 | 1.9 × 10^-1 | 1.5 × 10^-1 | 9.6 × 10^-4 | 7.1 × 10^-2
6 | 15 | Conjugate Gradient with Powell/Beale Restarts | CGB | 3.4 × 10^-2 | 1.8 × 10^-1 | 1.4 × 10^-1 | 2.7 × 10^-4 | 7.4 × 10^-1
7 | 15 | Fletcher–Powell Conjugate Gradient | CGF | 4.0 × 10^-2 | 2.0 × 10^-1 | 1.5 × 10^-1 | 3.3 × 10^-4 | 6.8 × 10^-1
8 | 15 | Polak–Ribiére Conjugate Gradient | CGP | 3.8 × 10^-2 | 1.9 × 10^-1 | 1.5 × 10^-1 | 2.9 × 10^-4 | 7.2 × 10^-1
9 | 15 | One-Step Secant | OSS | 4.0 × 10^-2 | 1.9 × 10^-1 | 1.5 × 10^-1 | 2.9 × 10^-4 | 6.8 × 10^-1
10 | 15 | Variable Learning Rate Backpropagation | GDX | 5.3 × 10^-2 | 2.2 × 10^-1 | 1.8 × 10^-1 | 9.6 × 10^-4 | 7.1 × 10^-2
Table 5. Network accuracy based on 15 learning sessions and 18 hidden neurons.
Learning Session | MSE | RMSE | MAE | MaxPE | R
1 | 4.16 × 10^-9 | 6.45 × 10^-5 | 5.00 × 10^-5 | 2.41 × 10^-4 | 8.96 × 10^-1
2 | 4.16 × 10^-9 | 6.45 × 10^-5 | 5.00 × 10^-5 | 2.41 × 10^-4 | 8.96 × 10^-1
3 | 4.16 × 10^-9 | 6.45 × 10^-5 | 5.00 × 10^-5 | 2.41 × 10^-4 | 8.96 × 10^-1
4 | 4.16 × 10^-9 | 6.45 × 10^-5 | 5.00 × 10^-5 | 2.41 × 10^-4 | 8.96 × 10^-1
5 | 4.16 × 10^-9 | 6.45 × 10^-5 | 5.00 × 10^-5 | 2.41 × 10^-4 | 8.96 × 10^-1
6 | 4.16 × 10^-9 | 6.45 × 10^-5 | 5.00 × 10^-5 | 2.41 × 10^-4 | 8.96 × 10^-1
7 | 4.16 × 10^-9 | 6.45 × 10^-5 | 5.00 × 10^-5 | 2.41 × 10^-4 | 8.96 × 10^-1
8 | 4.16 × 10^-9 | 6.45 × 10^-5 | 5.00 × 10^-5 | 2.41 × 10^-4 | 8.96 × 10^-1
9 | 4.16 × 10^-9 | 6.45 × 10^-5 | 5.00 × 10^-5 | 2.41 × 10^-4 | 8.96 × 10^-1
10 | 4.16 × 10^-9 | 6.45 × 10^-5 | 5.00 × 10^-5 | 2.41 × 10^-4 | 8.96 × 10^-1
11 | 4.16 × 10^-9 | 6.45 × 10^-5 | 5.00 × 10^-5 | 2.41 × 10^-4 | 8.96 × 10^-1
12 | 4.16 × 10^-9 | 6.45 × 10^-5 | 5.00 × 10^-5 | 2.41 × 10^-4 | 8.96 × 10^-1
13 | 4.16 × 10^-9 | 6.45 × 10^-5 | 5.00 × 10^-5 | 2.41 × 10^-4 | 8.96 × 10^-1
14 | 4.16 × 10^-9 | 6.45 × 10^-5 | 5.00 × 10^-5 | 2.41 × 10^-4 | 8.96 × 10^-1
15 | 4.16 × 10^-9 | 6.45 × 10^-5 | 5.00 × 10^-5 | 2.41 × 10^-4 | 8.96 × 10^-1
Mean | 4.16 × 10^-9 | 6.45 × 10^-5 | 5.00 × 10^-5 | 2.41 × 10^-4 | 8.96 × 10^-1
Table 6. Network accuracy for 15 learning sessions with varying numbers of hidden neurons.
Session | MSE | RMSE | MAE | MaxPE | R
1 | 8.67 × 10^-12 | 2.94 × 10^-6 | 2.34 × 10^-6 | 9.31 × 10^-6 | 9.99 × 10^-1
2 | 8.67 × 10^-12 | 2.94 × 10^-6 | 2.34 × 10^-6 | 9.07 × 10^-6 | 9.99 × 10^-1
3 | 8.67 × 10^-12 | 2.94 × 10^-6 | 2.34 × 10^-6 | 9.07 × 10^-6 | 9.99 × 10^-1
4 | 8.67 × 10^-12 | 2.94 × 10^-6 | 2.34 × 10^-6 | 9.07 × 10^-6 | 9.99 × 10^-1
5 | 8.67 × 10^-12 | 2.94 × 10^-6 | 2.34 × 10^-6 | 9.07 × 10^-6 | 9.99 × 10^-1
6 | 8.67 × 10^-12 | 2.94 × 10^-6 | 2.34 × 10^-6 | 9.07 × 10^-6 | 9.99 × 10^-1
7 | 8.67 × 10^-12 | 2.94 × 10^-6 | 2.34 × 10^-6 | 9.07 × 10^-6 | 9.99 × 10^-1
8 | 8.67 × 10^-12 | 2.94 × 10^-6 | 2.34 × 10^-6 | 9.07 × 10^-6 | 9.99 × 10^-1
9 | 8.67 × 10^-12 | 2.94 × 10^-6 | 2.34 × 10^-6 | 9.07 × 10^-6 | 9.99 × 10^-1
10 | 8.67 × 10^-12 | 2.94 × 10^-6 | 2.34 × 10^-6 | 9.07 × 10^-6 | 9.99 × 10^-1
11 | 8.67 × 10^-12 | 2.94 × 10^-6 | 2.34 × 10^-6 | 9.07 × 10^-6 | 9.99 × 10^-1
12 | 8.67 × 10^-12 | 2.94 × 10^-6 | 2.34 × 10^-6 | 9.07 × 10^-6 | 9.99 × 10^-1
13 | 8.67 × 10^-12 | 2.94 × 10^-6 | 2.34 × 10^-6 | 9.07 × 10^-6 | 9.99 × 10^-1
14 | 8.67 × 10^-12 | 2.94 × 10^-6 | 2.34 × 10^-6 | 9.07 × 10^-6 | 9.99 × 10^-1
15 | 8.67 × 10^-12 | 2.94 × 10^-6 | 2.34 × 10^-6 | 9.07 × 10^-6 | 9.99 × 10^-1
Mean | 8.67 × 10^-12 | 2.94 × 10^-6 | 2.34 × 10^-6 | 9.09 × 10^-6 | 9.99 × 10^-1