Applications of Machine Learning in Real World

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (30 November 2023) | Viewed by 16443

Special Issue Editors


Guest Editor: Dr. Syed Tahir Hussain Rizvi
Postdoctoral Researcher, Dipartimento di Elettronica e Telecomunicazioni (DET), Politecnico di Torino, 10129 Torino, Italy
Interests: embedded systems; machine learning; deep learning; computer vision; real-time forecasting

Guest Editor: Dr. Arslan Arif
Assistant Professor, Department of Electrical Engineering, University of Central Punjab, Lahore 54782, Pakistan
Interests: system-on-chip; embedded systems; artificial intelligence; image processing; hardware accelerators

Special Issue Information

Dear Colleagues,

Machine learning is an ever-growing field whose real-time applications now reach into daily life. It has already enjoyed great success across many applications, and that range keeps expanding as new algorithms and methods are developed. From the real-time forecasting of events to visual analysis tasks, state-of-the-art algorithms exhibit unmatched performance. Furthermore, with the ongoing traction of embedded computing, the deployment of machine learning algorithms on mobile devices is receiving increasing attention. In numerous practical applications, hand-held devices running machine learning methods can be more useful than larger systems because of their compact size and integrated resources.

We are pleased to invite you to contribute with your valuable research work to this Special Issue, “Applications of Machine Learning in Real World”, which aims to further advance the knowledge of machine learning engineers and researchers by providing a critical update on recent applications of machine learning to solve real-world problems.

In this Special Issue, original research articles and reviews are welcome. Research areas may include (but are not limited to) the following:

  • Applications of Machine Learning in Different Domains (Natural Language Processing, Healthcare, Visual Analysis, Telecommunication, Power, etc.);
  • Real-time Forecasting using Machine Learning;
  • Machine Learning Algorithms for Embedded Systems.

We look forward to receiving your contributions.

Dr. Syed Tahir Hussain Rizvi
Dr. Arslan Arif
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning applications
  • time series forecasting
  • visual analysis
  • machine learning for healthcare
  • algorithms on embedded platforms

Published Papers (11 papers)


Research


20 pages, 5277 KiB  
Article
Sentiment Analysis in Portuguese Restaurant Reviews: Application of Transformer Models in Edge Computing
by Alexandre Branco, Daniel Parada, Marcos Silva, Fábio Mendonça, Sheikh Shanawaz Mostafa and Fernando Morgado-Dias
Electronics 2024, 13(3), 589; https://doi.org/10.3390/electronics13030589 - 31 Jan 2024
Viewed by 590
Abstract
This study focuses on improving sentiment analysis in restaurant reviews by leveraging transfer learning and transformer-based pre-trained models. This work evaluates the suitability of pre-trained deep learning models for analyzing Natural Language Processing tasks in Portuguese. It also explores the viability of utilizing edge devices for Natural Language Processing tasks, considering their computational limitations and resource constraints. Specifically, we employ bidirectional encoder representations from transformers (BERT) and the robustly optimized BERT approach (RoBERTa), two state-of-the-art models, to build a sentiment review classifier. The classifier's performance is evaluated using accuracy and the area under the receiver operating characteristic curve as the primary metrics. Our results demonstrate that the classifier developed using ensemble techniques outperforms the baseline model (from 0.80 to 0.84) in accurately classifying restaurant review sentiments when three classes are considered (negative, neutral, and positive), reaching an accuracy and area under the receiver operating characteristic curve higher than 0.8 on a Zomato restaurant review dataset provided for this work. This study seeks to create a model for the precise classification of Portuguese reviews into positive, negative, or neutral categories. The flexibility of deploying our model on affordable hardware platforms suggests its potential to enable real-time solutions. The deployment of the model on edge computing platforms improves accessibility in resource-constrained environments.
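A minimal sketch of the inference side of such a classifier, using the Hugging Face Transformers API: the Portuguese BERT checkpoint (neuralmind/bert-base-portuguese-cased), the label ordering, and the example review are illustrative assumptions rather than the authors' exact setup, and the classification head would first need fine-tuning on labeled reviews.

```python
# Three-class sentiment inference with a BERT-family checkpoint.
# The checkpoint name and label mapping are illustrative assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "neuralmind/bert-base-portuguese-cased"  # assumed Portuguese BERT
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL, num_labels=3  # negative / neutral / positive
)
model.eval()

def classify(review: str) -> str:
    # Label order is an assumption; after fine-tuning it must match training.
    labels = ["negative", "neutral", "positive"]
    inputs = tokenizer(review, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return labels[int(logits.argmax(dim=-1))]

# The freshly added 3-way head is randomly initialized: fine-tune on labeled
# reviews (e.g., via the Trainer API) before trusting these predictions.
print(classify("A comida estava excelente e o serviço foi rápido."))
```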

17 pages, 1227 KiB  
Article
Training Data Augmentation with Data Distilled by Principal Component Analysis
by Nikolay Metodiev Sirakov, Tahsin Shahnewaz and Arie Nakhmani
Electronics 2024, 13(2), 282; https://doi.org/10.3390/electronics13020282 - 08 Jan 2024
Viewed by 1477
Abstract
This work develops a new method for vector data augmentation. The proposed method applies principal component analysis (PCA), determines the eigenvectors of a set of training vectors for a machine learning (ML) method, and uses them to generate the distilled vectors. The training and PCA-distilled vectors have the same dimension. The user chooses the number of vectors to be distilled and augmented to the set of training vectors. A statistical approach determines the lowest number of vectors to be distilled such that, when augmented to the original vectors, the extended set trains an ML classifier to achieve a required accuracy. Hence, the novelty of this study is the distillation of vectors with the PCA method and their use to augment the original set of vectors; the resulting advantage is improved classification statistics for ML classifiers. To validate this advantage, we conducted experiments with four public databases and applied four classifiers: a neural network, logistic regression, and support vector machines with linear and polynomial kernels. For the purpose of augmentation, we conducted several distillations, including nested distillation (double distillation), meaning that new vectors were distilled from already distilled vectors. We trained the classifiers with three sets of vectors: the original vectors; the original vectors augmented with vectors distilled by PCA; and the original vectors augmented with both PCA-distilled and double-distilled vectors. The experimental results presented in the paper confirm that augmenting the original training vectors with PCA-distilled vectors improves the classification statistics of ML methods.
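One plausible reading of the distillation step, sketched under the assumption that each distilled vector is the training mean shifted along a principal axis by one standard deviation (the paper's exact construction may differ):

```python
# Sketch of PCA-based "distillation" for training-set augmentation.
import numpy as np
from sklearn.decomposition import PCA

def pca_distill(X: np.ndarray, n_distill: int) -> np.ndarray:
    """Return n_distill synthetic vectors with the same dimension as X's rows."""
    pca = PCA(n_components=n_distill).fit(X)
    scale = np.sqrt(pca.explained_variance_)      # std. dev. along each axis
    # Assumed construction: mean shifted along each principal axis.
    return pca.mean_ + pca.components_ * scale[:, None]

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 8))               # stand-in training vectors
# In practice, distillation would presumably run per class so that the
# augmented vectors inherit the class label.
X_aug = np.vstack([X_train, pca_distill(X_train, n_distill=3)])
print(X_aug.shape)                                # (103, 8)
```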

14 pages, 1825 KiB  
Article
Fault Diagnosis of Oil-Immersed Transformers Based on the Improved Neighborhood Rough Set and Deep Belief Network
by Xiaoyang Miao, Hongda Quan, Xiawei Cheng, Mingming Xu, Qingjiang Huang, Cong Liang and Juntao Li
Electronics 2024, 13(1), 5; https://doi.org/10.3390/electronics13010005 - 19 Dec 2023
Cited by 2 | Viewed by 702
Abstract
As one of the essential components in power systems, transformers play a pivotal role in the transmission and distribution of renewable energy generation. Accurate diagnosis of transformer fault types is crucial for maintaining the safety of power systems. Current research focuses on transformer fault diagnosis methods based on Dissolved Gas Analysis (DGA). Traditional diagnostic methods directly use the five fault gases from DGA data as model input features, but this approach does not comprehensively reflect all potential fault types in transformers. In this paper, a non-coding ratio method was employed to generate 35 fault gas ratios from the five fault gases, subsequently refined through correlation analysis to eliminate redundant feature variables, resulting in 15 significantly representative fault gas ratios. To further streamline the feature variables and remove elements that do not contribute to fault diagnosis, an improved Neighborhood Rough Set (INRS) algorithm was introduced, leveraging symmetrical uncertainty measurement. Using the proposed INRS, the eight most representative fault gas ratios were selected as input variables for constructing a Deep Belief Network (DBN) diagnostic model. Experimental results on DGA data confirmed the effectiveness and accuracy of the proposed method.
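The symmetrical uncertainty measure that the INRS leverages has a standard definition, SU(X, Y) = 2·I(X; Y) / (H(X) + H(Y)). A small sketch of computing it for a discretized gas ratio against a fault label (the quartile binning and toy data are assumptions):

```python
# Symmetrical uncertainty between a binned feature and a class label.
import numpy as np
from sklearn.metrics import mutual_info_score

def entropy(x: np.ndarray) -> float:
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())          # nats, matching sklearn's MI

def symmetrical_uncertainty(x: np.ndarray, y: np.ndarray) -> float:
    denom = entropy(x) + entropy(y)
    return 2.0 * mutual_info_score(x, y) / denom if denom > 0 else 0.0

rng = np.random.default_rng(1)
gas_ratio = rng.normal(size=200)                  # stand-in for one DGA ratio
fault_label = (gas_ratio + rng.normal(scale=0.5, size=200) > 0).astype(int)
bins = np.quantile(gas_ratio, [0.25, 0.5, 0.75])  # assumed discretization
print(round(symmetrical_uncertainty(np.digitize(gas_ratio, bins), fault_label), 3))
```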

20 pages, 2841 KiB  
Article
Simultaneous Pipe Leak Detection and Localization Using Attention-Based Deep Learning Autoencoder
by Divas Karimanzira
Electronics 2023, 12(22), 4665; https://doi.org/10.3390/electronics12224665 - 16 Nov 2023
Viewed by 967
Abstract
Water distribution networks are often susceptible to pipeline leaks caused by mechanical damage, natural hazards, corrosion, and other factors. This paper focuses on the detection of leaks in water distribution networks (WDN) using a data-driven approach based on machine learning. A hybrid autoencoder neural network (AE) is developed, which utilizes unsupervised learning to address the issue of unbalanced data (as anomalies are rare events). The AE consists of a 3DCNN encoder, a ConvLSTM decoder, and a ConvLSTM future predictor, making the anomaly detection robust. Additionally, spatial and temporal attention mechanisms are employed to enhance leak localization. The AE first learns the expected behavior and subsequently detects leaks by identifying deviations from this expected behavior. To evaluate the performance of the proposed method, the Water Network Tool for Resilience (WNTR) simulator is utilized to generate water pressure and flow rate data in a water supply network. Various conditions, such as fluctuating water demands, data noise, and the presence of leaks, are considered using the pressure-driven demand (PDD) method. Datasets with and without pipe leaks are obtained; the AE is trained on the dataset without leaks and tested on the dataset with simulated pipe leaks. The results, based on a benchmark WDN and a confusion matrix analysis, demonstrate that the proposed method identifies leaks in 96% of cases with a false positive rate of 4%, outperforming two baselines, a multichannel CNN encoder with LSTM decoder (MC-CNN-LSTM) and a supervised random forest model, whose false positive rates are 8% and 15%, respectively. Furthermore, a real case study demonstrates the applicability of the developed model for leak detection under the operational conditions of water supply networks using inline sensor data.
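The detection rule itself (learn normal behavior, flag large deviations) can be illustrated with a toy dense autoencoder standing in for the paper's 3DCNN/ConvLSTM hybrid; the mean-plus-three-sigma threshold below is a common choice, not necessarily the one used in the paper:

```python
# Anomaly detection by reconstruction error: train on leak-free data only.
import torch
import torch.nn as nn

n_sensors = 16
ae = nn.Sequential(  # toy encoder/decoder; the paper's architecture differs
    nn.Linear(n_sensors, 8), nn.ReLU(), nn.Linear(8, n_sensors)
)
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)

normal = torch.randn(512, n_sensors)            # stand-in for leak-free data
for _ in range(200):                            # learn "expected behavior"
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(normal), normal)
    loss.backward()
    opt.step()

with torch.no_grad():
    err = ((ae(normal) - normal) ** 2).mean(dim=1)
    threshold = err.mean() + 3 * err.std()      # calibrated on normal data
    test = normal[:4] + torch.tensor([2.0])     # shifted readings, e.g. a leak
    flags = ((ae(test) - test) ** 2).mean(dim=1) > threshold
print(flags)                                    # deviations flagged as leaks
```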

11 pages, 568 KiB  
Article
Deep Learning-Enabled Improved Direction-of-Arrival Estimation Technique
by George Jenkinson, Muhammad Ali Babar Abbasi, Amir Masoud Molaei, Okan Yurduseven and Vincent Fusco
Electronics 2023, 12(16), 3505; https://doi.org/10.3390/electronics12163505 - 18 Aug 2023
Cited by 1 | Viewed by 1438
Abstract
This paper provides a simple yet effective approach to improve direction-of-arrival (DOA) estimation performance in extreme signal-to-noise-ratio (SNR) conditions. As an example, a multiple signal classification (MUSIC) algorithm with a deep learning (DL) approach is used. First, a brief review of existing DOA estimation techniques is provided, followed by a demonstration of a simulation environment created on the MATLAB platform to generate and resolve signals from a uniform rectangular array of antenna elements. Following that is an attempt to improve the estimation accuracy of these signals by training various DL approaches, including multi-layer perceptrons and one- and two-dimensional convolutional neural networks, on the generated dataset. Key findings include cases where the developed DL approach can resolve signals and provide accurate DOA estimations that the MUSIC algorithm cannot.
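For reference, the baseline MUSIC pseudospectrum that the DL stage augments can be sketched in a few lines for a uniform linear array (the paper works with a rectangular array in MATLAB; the array size, noise level, and search grid here are illustrative):

```python
# Classical MUSIC: eigendecompose the sample covariance, project steering
# vectors onto the noise subspace, and peak-search the pseudospectrum.
import numpy as np

M, d, n_src = 8, 0.5, 1             # elements, spacing (wavelengths), sources
theta_true = 20.0                   # ground-truth DOA in degrees
rng = np.random.default_rng(2)

def steering(theta_deg):
    return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(np.radians(theta_deg)))

snapshots = 200
s = rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots)
X = np.outer(steering(theta_true), s)
X += 0.1 * (rng.normal(size=X.shape) + 1j * rng.normal(size=X.shape))  # noise

R = X @ X.conj().T / snapshots                  # sample covariance
eigval, eigvec = np.linalg.eigh(R)              # eigenvalues ascending
En = eigvec[:, : M - n_src]                     # noise subspace
grid = np.arange(-90, 90.25, 0.25)
P = [1 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid]
print("estimated DOA:", grid[int(np.argmax(P))], "deg")
```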

17 pages, 871 KiB  
Article
Simulation-Based Headway Optimization for the Bangkok Airport Railway System under Uncertainty
by Pruk Sasithong, Amir Parnianifard, Nitinun Sinpan, Suvit Poomrittigul, Muhammad Saadi and Lunchakorn Wuttisittikulkij
Electronics 2023, 12(16), 3493; https://doi.org/10.3390/electronics12163493 - 17 Aug 2023
Cited by 1 | Viewed by 745
Abstract
The ever-increasing demand for intercity travel, as well as competition among all modes of transportation, is an unavoidable reality that today's urban rail transit systems must deal with. To meet this challenge, urban railway companies must make better use of their existing plans and resources. Analytical approaches or simulation modeling can be used to develop or change a rail schedule to reflect the appropriate passenger demand. However, in the case of complex railway networks with several interlocking zones, analytical methods frequently have drawbacks. The goal of this article is to create a new simulation-based optimization model for the Bangkok railway system that takes into account real assumptions and requirements of the railway system, such as uncertainty. The common particle swarm optimization (PSO) technique is combined with the developed simulation model to optimize the headways for each period of each day. Two different objective functions are incorporated into the models to consider both customer satisfaction, by reducing the average waiting time, and railway management satisfaction, by reducing the required energy usage (e.g., reducing the number of operating trains). The results obtained using a real dataset from the Bangkok railway system demonstrate that the simulation-based optimization approach for robust train service timetable scheduling, which incorporates both passenger waiting times and the number of operating trains as equally important objectives, achieved an average waiting time of 11.02 min (with a standard deviation of 1.65 min) across all time intervals.
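A bare-bones PSO loop of the kind coupled to such a simulator, with a toy surrogate objective standing in for the simulated average waiting time plus an energy/train-count term (the bounds, coefficients, and surrogate itself are assumptions):

```python
# Minimal particle swarm optimization over per-period headways.
import numpy as np

rng = np.random.default_rng(3)
n_particles, n_periods, iters = 20, 6, 100

def objective(headways):  # toy surrogate for the simulation model
    waiting = headways / 2.0                    # avg wait ~ half the headway
    trains = 60.0 / headways                    # trains per hour per period
    return waiting.mean() + 0.1 * trains.sum()  # waiting + energy proxy

x = rng.uniform(4, 15, size=(n_particles, n_periods))  # headways in minutes
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 4, 15)                   # keep headways feasible
    f = np.array([objective(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()
print("optimized headways (min):", np.round(gbest, 2))
```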

18 pages, 3684 KiB  
Article
Research on Non-Intrusive Load Recognition Method Based on Improved Equilibrium Optimizer and SVM Model
by Jingqin Wang, Bingpeng Zhang and Liang Shu
Electronics 2023, 12(14), 3138; https://doi.org/10.3390/electronics12143138 - 19 Jul 2023
Cited by 2 | Viewed by 725
Abstract
Non-intrusive load monitoring is the main trend of green, energy-saving electricity consumption at present, and load identification is a core part of non-intrusive load monitoring. A support vector machine (SVM) is commonly used in load recognition, but there are still some problems in parameter selection, resulting in low recognition accuracy. Therefore, an improved equilibrium optimizer (IEO) is proposed to optimize the parameters of the SVM. Firstly, household appliance data are collected and load features are extracted to build a self-test dataset. Secondly, Bernoulli chaotic mapping, adaptive factors, and Lévy flights are introduced to improve the traditional equilibrium optimizer algorithm. The performance of the IEO algorithm is validated on test functions, and the SVM is optimized using the IEO algorithm to establish the IEO-SVM load identification model. Finally, the recognition performance of the IEO-SVM model is verified on the self-test dataset and a public dataset. The results show that the IEO algorithm has good optimization accuracy and convergence speed on the test functions. The IEO-SVM load recognition model achieves an accuracy of 99.428% on the self-test dataset and 100% accuracy on the public dataset, and its classification performance is significantly better than other classification algorithms, so it can complete the load recognition task well.
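A much-simplified stand-in for the optimize-then-classify pattern: cross-validated accuracy serves as fitness while a tiny population loop searches the SVM's (C, gamma) and contracts toward the best point found. This is not the paper's IEO (no Bernoulli chaotic mapping, adaptive factors, or Lévy flights), and the dataset is a placeholder for the extracted load features:

```python
# Population-based search over SVM hyperparameters, fitness = CV accuracy.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)  # placeholder for appliance load features
rng = np.random.default_rng(4)

def fitness(log_c, log_g):
    clf = SVC(C=10.0 ** log_c, gamma=10.0 ** log_g)
    return cross_val_score(clf, X, y, cv=3).mean()

pop = rng.uniform([-1, -6], [3, -1], size=(8, 2))  # log10(C), log10(gamma)
best, best_f = pop[0].copy(), -1.0
for _ in range(15):
    for i, (lc, lg) in enumerate(pop):
        f = fitness(lc, lg)
        if f > best_f:
            best, best_f = pop[i].copy(), f
    pop = best + 0.5 * rng.normal(size=pop.shape)  # contract toward best
    pop = np.clip(pop, [-1, -6], [3, -1])
print(f"best log10(C), log10(gamma) = {best}, CV accuracy = {best_f:.3f}")
```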

23 pages, 5286 KiB  
Article
Interval Type 2 Fuzzy Adaptive Motion Drive Algorithm Design
by Syed M. Ali, Yanling Guo, Syed Tahir Hussain Rizvi, Roohul Amin and Awais Yasin
Electronics 2023, 12(13), 2946; https://doi.org/10.3390/electronics12132946 - 04 Jul 2023
Viewed by 666
Abstract
Motion drive algorithms are a set of filters designed to simulate realistic motion and are an integral part of contemporary vehicle simulators. This paper presents the design of a novel intelligent interval type 2 fuzzy adaptive motion drive algorithm for an off-road uphill vehicle simulator. The off-road uphill vehicle simulator is used to train and assess driver behavior under varying operational and environmental conditions in mountainous terrain. The proposed algorithm is the first of its kind for off-road uphill vehicle simulators, and it offers numerous benefits over other motion drive algorithms. It enables the simulator to adapt to changes in the uphill road surface, vehicle weight distribution, and other factors that influence off-road driving in mountainous terrain, and it simulates driving on hilly terrain more realistically than existing algorithms, allowing drivers to learn and practice in a safe and controlled environment. Additionally, the proposed algorithm overcomes limitations present in existing algorithms. Its performance is evaluated via test drives and compared to the performance of the conventional motion drive algorithm. The results demonstrate that the proposed algorithm is more effective than the conventional motion drive algorithm for the ground vehicle simulator. The pitch and roll responses demonstrate that the proposed algorithm enables the driver to experience abrupt changes in terrain while maintaining the driver's safety. The surge response demonstrates that the proposed MDA handles the acceleration and deceleration of the vehicle very effectively. In addition, the results demonstrate that the proposed algorithm produces a smoother drive, prevents false motion cues, and offers a more immersive and realistic driving experience.
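For contrast with the adaptive approach, the conventional baseline can be sketched as a classical washout: the platform follows high-pass-filtered vehicle acceleration so that sustained accelerations decay within the platform's travel limits. The cutoff frequency and input profile below are illustrative, not the paper's configuration:

```python
# First-order high-pass "washout" of a sustained surge acceleration.
import numpy as np
from scipy.signal import butter, lsim

t = np.arange(0.0, 10.0, 0.01)
accel = np.where(t > 2.0, 3.0, 0.0)            # sustained 3 m/s^2 surge step
b, a = butter(1, 2 * np.pi * 0.5, btype="highpass", analog=True)  # 0.5 Hz
_, platform, _ = lsim((b, a), accel, t)        # washed-out platform command
# The transient passes through; the sustained component decays toward zero.
print(round(platform[210], 2), round(platform[-1], 2))
```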

Review


30 pages, 1994 KiB  
Review
The Challenges of Machine Learning: A Critical Review
by Enrico Barbierato and Alice Gatti
Electronics 2024, 13(2), 416; https://doi.org/10.3390/electronics13020416 - 19 Jan 2024
Cited by 1 | Viewed by 2104
Abstract
The concept of learning has multiple interpretations, ranging from acquiring knowledge or skills to constructing meaning and social development. Machine Learning (ML) is considered a branch of Artificial Intelligence (AI) and develops algorithms that can learn from data and generalize their judgment to new observations by exploiting primarily statistical methods. The new millennium has seen the proliferation of Artificial Neural Networks (ANNs), a formalism able to reach extraordinary achievements in complex problems such as computer vision and natural language recognition. In particular, designers claim that this formalism has a strong resemblance to the way biological neurons operate. This work argues that, although ML has a mathematical/statistical foundation, it cannot be strictly regarded as a science, at least from a methodological perspective. The main reason is that ML algorithms have notable predictive power but cannot necessarily provide a causal explanation for the achieved predictions. For example, an ANN could be trained on a large dataset of consumer financial information to predict creditworthiness. The model takes into account various factors like income, credit history, debt, spending patterns, and more. It then outputs a credit score or a decision on credit approval. However, the complex and multi-layered nature of the neural network makes it almost impossible to understand which specific factors or combinations of factors the model is using to arrive at its decision. This lack of transparency can be problematic, especially if the model denies credit and the applicant wants to know the specific reasons for the denial. The model's "black box" nature means it cannot provide a clear explanation or breakdown of how it weighed the various factors in its decision-making process. Secondly, this work rejects the belief that a machine can simply learn from data, in either supervised or unsupervised mode, just by applying statistical methods. The process of learning is much more complex, as it requires the full comprehension of a learned ability or skill. In this sense, further ML advancements, such as reinforcement learning and imitation learning, show encouraging similarities to the cognitive skills used in human learning.

25 pages, 881 KiB  
Review
Efficient Secure Routing Mechanisms for the Low-Powered IoT Network: A Literature Review
by Muhammad Zunnurain Hussain and Zurina Mohd Hanapi
Electronics 2023, 12(3), 482; https://doi.org/10.3390/electronics12030482 - 17 Jan 2023
Cited by 10 | Viewed by 3008
Abstract
The Wireless Sensor Network in the Internet of Things (WSN-IoT) has been flourishing as another global breakthrough over the past few years. The WSN-IoT is reforming the way we live today by spreading through all areas of life, including responses to the demographic aging crisis and the associated decline in jobs. For a company to increase revenues and cost-effectiveness, growth should be customer-centered and agile within the organization. At the same time, WSN-IoT networks face threats such as sniffing, spoofing, and intruders. WSN-IoT networks are often made up of multiple embedded devices (sensors and actuators) with limited resources that are joined via various connections in a low-power and lossy manner, yet, to our knowledge, their security methods have not been thoroughly investigated. Recently, a partial implementation of the security mechanisms of the Routing Protocol for Low-Power and Lossy Networks (RPL) was published for the Contiki operating system, allowing us to evaluate RPL's security methods. This paper presents a critical analysis of security issues in the WSN-IoT and applications of the WSN-IoT, along with network management details using machine learning. The paper gives insights into the Internet of Things in Low Power Networks (IoT-LPN) architecture, research challenges of the Internet of Things in Low Power Networks, network attacks in WSN-IoT infrastructures, and the significant WSN-IoT objectives that need to be accompanied by current WSN-IoT frameworks. Several applied WSN-IoT security mechanisms and recent contributions have been considered, and their limitations have been stated, marking a significant research area for the future. Moreover, various low-powered IoT protocols have been further discussed and evaluated, along with their limitations. Finally, a comparative analysis is performed to assess the performance of the proposed work. The study shows that the proposed work covers a wide range of factors, whereas the rest of the research in the literature is limited.

23 pages, 2468 KiB  
Review
Efficient and Secured Mechanisms for Data Link in IoT WSNs: A Literature Review
by Muhammad Zulkifl Hasan and Zurina Mohd Hanapi
Electronics 2023, 12(2), 458; https://doi.org/10.3390/electronics12020458 - 16 Jan 2023
Cited by 7 | Viewed by 2738
Abstract
The Internet of Things (IoT) and wireless sensor networks (WSNs) have been developing rapidly and tremendously in recent years as computing technologies have brought about a significant revolution. Their applications and implementations can be found all around us, either individually or collaboratively. WSNs play a leading role in increasing the flexibility of industrial resources and thus productivity in the IoT. The critical principle of the IoT is to make existing businesses sufficiently intelligent to recognize the need for significant fault mitigation and short-cycle adaptation to improve effectiveness and financial profits. This article presents security protocols efficiently applied at the data link layer for WSN and IoT-based frameworks. It outlines the importance of WSN-IoT applications as well as the architecture of WSNs in the IoT. Our primary aim is to highlight the research issues and limitations of WSNs related to the IoT. The fundamental goal of this work is to emphasize a suggested architecture linked to WSN-IoT to enhance energy and power consumption, mobility, information transmission, QoS, and security, as well as to present practical solutions to data link layer difficulties for the future using machine learning. Moreover, we present data link layer protocol issues, attacks, limitations, and research gaps for WSN frameworks, based on recent work on the data link layer concerning WSN applications. Current significant issues and challenges pertain to flow control, quality of service (QoS), security, and performance. In the literature, comparatively little work has addressed the data link layer in WSNs and its relation to improved network performance.
