
Emerging Sensor Communication Network-Based AI/ML Driven Intelligent IoT

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Internet of Things".

Deadline for manuscript submissions: closed (30 April 2023) | Viewed by 72635

Special Issue Editors

Department of Computer Science and Engineering, Chitkara University, Punjab, India
Interests: wireless communication; wireless sensor networks; wireless mesh networks; next generation networking; network security; Internet of Things; UAV; medical image processing and edge/fog computing
Special Issues, Collections and Topics in MDPI journals
Department of Computer Science, University of Petroleum & Energy Studies, Dehradun 248007, India
Interests: medical image processing; pattern recognition; computer vision; deep learning
Computer Engineering Department, College of Computer Science and Engineering, Hail University, Hail 81481, Saudi Arabia
Interests: Internet of Things (IoT); computational intelligence; security; brain-computer interface (BCI); big data; artificial intelligence; optimization; deep learning
BISITE Research Group, Edificio Multiusos I+D+I, University of Salamanca, 37007 Salamanca, Spain
Interests: artificial intelligence; machine learning; edge computing; distributed computing; blockchain; consensus models; smart cities; smart grid
Special Issues, Collections and Topics in MDPI journals

Special Issue Information

Dear Colleagues,

The Internet of Things (IoT) is currently one of the fastest-growing areas for the application of Artificial Intelligence (AI) and Machine Learning (ML) techniques. The global IoT market is projected to reach around 24 billion devices by 2030. With these developments, AI- and ML-supported approaches have empowered real-life applications in industry, e-healthcare, smart cities, smart utilities, smart transportation, and smart homes. AI and ML are among the most prominent research topics, as these technologies progressively find their way into everything from pioneering healthcare systems and novel quantum computing to personal assistants and consumer electronics. As IoT devices proliferate across heterogeneous applications, there is a need to design scalable, resource-efficient sensor-based systems that perform well in different types of wireless systems. ML techniques are advantageous in communication systems, whereas deep learning techniques are widely utilized in big-data analysis for prediction and performance improvement. The use of AI and ML is increasingly intertwined with the IoT: AI, ML, and deep learning are now being utilized to make IoT services and devices smarter and more secure.

In the modern context, Hyperautomation has been identified as a major IT trend; it builds on AI, ML, and robotic process automation. The COVID-19 pandemic accelerated this concept, in which anything within an organization that can be automated is automated, also known as digital/intelligent business process automation. Such automated processes can adapt to fluctuating situations and respond to unanticipated circumstances. Other trends include providing security and connectivity to diverse types of IoT devices. There is therefore a need for automated, efficient, and scalable strategies that can identify, classify, apply, and monitor policies to ensure appropriate functionality without affecting other services on the network. Moreover, to unlock the potential of AI in business, AI processing and its data have moved from the cloud to the edge and fog of the network. This in turn requires networks that provide dynamic performance, low-latency communication, and end-to-end bandwidth. Intent-based networking is a new approach that leverages these capabilities to meet business goals: the network uses natural language processing (NLP) to take input from any line of business and translates it into a set of policies that support automatic decision-making. Many application drawbacks, such as unplanned downtime, can thus be avoided by predicting equipment failure with data analytics and scheduling maintenance accordingly; predictive maintenance mitigates the destructive economic cost of unexpected interruptions.

Operational efficiency can be increased by predicting working conditions and identifying the factors that must be adapted on the fly to maintain ideal outcomes. AI also enables services such as NLP interfaces for speaking with machinery, fleet management, AI-enabled robots, and drones. By integrating AI and the IoT, risk management can be enhanced: various types of risk are predicted in advance, enabling a rapid, automated response. AI has become a standard companion to IoT operations, helping to improve them and offering a competitive edge in business performance.

Developments in the IoT play a significant role in our daily lives. In the IoT, huge numbers of devices such as actuators and sensors are deployed and connected to collect data in domains such as healthcare, transportation, public safety, energy, manufacturing, and smart city infrastructure. At the same time, ML and deep learning (DL) have shown substantial success in transforming complex and massive datasets into precise insights, which can significantly facilitate intelligence, analysis, automation, and decision-making. Integrated with advances in big-data analytics, networking technologies, and large-scale computing, ML has enabled modeling and intelligence at scale, achieving enormous accomplishments in diverse areas. Despite these achievements, leveraging machine learning in the IoT faces significant challenges on the way to an AI-enabled Internet of controllable and dependable things; outstanding requirements for latency, connectivity, accessibility, scalability, resiliency, and security must be taken into account. The fusion of ML into the IoT consequently creates prospects that necessitate interdisciplinary endeavors and novel research to address these challenges.

This Special Issue focuses on research results in the above-mentioned domains. Contributions are invited in the field of AI/ML models for IoT devices and deployed networks, as well as on big-data analytics, decision-making approaches, and new practices and concepts for AI/ML-automated systems. Authors from both academia and industry working on the application of AI/ML techniques to computer systems are invited to submit original articles on the design, optimization, and implementation of protocols, models, and optimization methods.

Topics: This Special Issue includes the following topics of interest:

  • Theoretical foundations and models of machine learning for IoT;
  • Machine learning for IoT system deployment and operation;
  • Machine learning for IoT-assisted industrial automation;
  • Machine learning-enabled real-time IoT data analytics;
  • Machine learning-enabled sensing and decision-making for IoT;
  • Machine learning-enabled cloud/edge computing systems for IoT;
  • Evaluation platforms and hardware-in-the-loop testbeds for machine learning-enabled IoT;
  • Machine learning-assisted intrusion and malware detection for IoT;
  • Machine learning for access congestion management in edge computing IoT networks;
  • AI/DL-based IoT-cloud convergent algorithms/applications for healthcare;
  • ML-driven long-term pandemic risk prediction;
  • AI/DL-empowered data fusion for healthcare;
  • Sensor-based human-centric AI for IoT systems;
  • Explainable AI (XAI) and predictive data analytics for healthcare;
  • DL techniques for handling the post-COVID-19 crisis;
  • IoT-cloud healthcare big-data storage, processing, and analysis using ML/DL techniques;
  • Adversarial attacks, threats, and defenses for DL-enabled healthcare;
  • Protocols and algorithms for intelligent IoT systems;
  • 5G/6G technology-enabled AIoT.

Prof. Bhisham Sharma
Dr. Deepika Koundal
Dr. Rabie A. Ramadan
Prof. Dr. Juan M. Corchado
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (18 papers)


Editorial


5 pages, 186 KiB  
Editorial
Emerging Sensor Communication Network-Based AI/ML Driven Intelligent IoT
Sensors 2023, 23(18), 7814; https://doi.org/10.3390/s23187814 - 12 Sep 2023
Viewed by 634
Abstract
At present, the field of the Internet of Things (IoT) is one of the fastest-growing areas in terms of Artificial Intelligence (AI) and Machine Learning (ML) techniques [...] Full article

Research


20 pages, 104761 KiB  
Article
Development of a Smart Signalization for Emergency Vehicles
Sensors 2023, 23(10), 4703; https://doi.org/10.3390/s23104703 - 12 May 2023
Cited by 3 | Viewed by 3877
Abstract
As the population grows, the number of motorized vehicles on the roads also increases, and traffic congestion follows. Traffic lights are used at junctions, intersections, pedestrian crossings, and other places where traffic must be controlled to avoid chaos. Because of the traffic lights installed in a city, queues of vehicles form on the streets for most of the day, and many problems arise as a result. One of the most important is that emergency vehicles, such as ambulances, fire engines, and police cars, cannot arrive on time despite their traffic priority. Vehicles dispatched from hospitals, fire stations, and police departments need to reach the scene in a very short time, so the time lost in traffic is a problem that needs to be addressed. In this study, a solution and a related application were developed so that such privileged vehicles can reach their target destination as soon as possible. A route is determined between the emergency vehicle's current location and its target location, and communication with the traffic lights is provided by a mobile application developed specifically for the vehicle driver. Using this application, the person controlling the lights can turn them green while the priority vehicles pass; after they have passed, signaling returns to normal. This process is repeated until the vehicle reaches its destination. Full article
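The route-determination step described above can be illustrated with a generic shortest-path computation; the paper does not publish its algorithm in this abstract, so the following is a hypothetical sketch over an invented road graph, with intersections then signalled in path order:

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's shortest path over a weighted road graph.

    graph: {node: [(neighbor, travel_time_seconds), ...]}
    Returns (total_time, [node, ...]) or (float('inf'), []) if unreachable.
    """
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    visited = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            # Reconstruct the route by walking back through predecessors.
            path = [goal]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return d, path[::-1]
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float('inf')):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    return float('inf'), []

# Hypothetical road graph: edge weights are travel times in seconds.
# The signalized intersections along the returned route would then be
# switched green (e.g. via the mobile application) before arrival.
roads = {
    'hospital': [('A', 40), ('B', 90)],
    'A': [('B', 30), ('scene', 120)],
    'B': [('scene', 50)],
}
t, route = shortest_route(roads, 'hospital', 'scene')
```

All node names and travel times here are made up for illustration; a deployed system would derive them from live map and traffic data.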

26 pages, 5534 KiB  
Article
Cyber Attack Detection for Self-Driving Vehicle Networks Using Deep Autoencoder Algorithms
Sensors 2023, 23(8), 4086; https://doi.org/10.3390/s23084086 - 18 Apr 2023
Cited by 7 | Viewed by 3801
Abstract
Connected and autonomous vehicles (CAVs) present exciting opportunities for improving both the mobility of people and the efficiency of transportation systems. The small computers in CAVs are referred to as electronic control units (ECUs) and are often perceived as components of a broader cyber–physical system. Subsystems of ECUs are networked together via a variety of in-vehicle networks (IVNs) so that data may be exchanged and the vehicle can operate more efficiently. The purpose of this work is to explore the use of machine learning and deep learning methods in defence against cyber threats to autonomous cars, with a primary emphasis on identifying erroneous information implanted in the data buses of various automobiles. To categorise this type of erroneous data, the gradient boosting method is used, providing a productive illustration of machine learning. To examine the performance of the proposed model, two real automated vehicle network datasets, namely the Car-Hacking and UNSW-NB15 datasets, were used in the verification process of the proposed security solution. These datasets include spoofing, flooding, and replay attacks, as well as benign packets. The categorical data were transformed into numerical form via pre-processing. Machine learning and deep learning algorithms, namely k-nearest neighbour (KNN), decision trees, long short-term memory (LSTM), and deep autoencoders, were employed to detect CAN attacks. According to the experiments, the decision tree and KNN machine learning approaches achieved accuracy levels of 98.80% and 99%, respectively, while the LSTM and deep autoencoder deep learning approaches achieved 96% and 99.98%, respectively. The maximum accuracy was thus achieved with the decision tree and deep autoencoder algorithms. Statistical analysis of the classification results showed that the determination coefficient for the deep autoencoder reached R2 = 95%. The models built in this way surpassed those already in use, with almost perfect accuracy, and the developed system is able to overcome security issues in IVNs. Full article
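The autoencoder-based detection idea, training on benign traffic and flagging frames that reconstruct poorly, can be sketched in a few lines. The following toy example is not the paper's architecture or datasets: it uses synthetic 8-byte "frames" confined to a low-dimensional subspace, a minimal linear autoencoder, and a reconstruction-error threshold chosen from benign data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for benign CAN frames: 8-byte payloads that vary along
# only two latent directions (real bus traffic is similarly structured).
latent = rng.normal(size=(500, 2))
basis = rng.normal(size=(2, 8))
benign = latent @ basis + 0.05 * rng.normal(size=(500, 8))

# Minimal linear autoencoder (8 -> 2 -> 8), trained by plain gradient
# descent on the mean squared reconstruction error.
W_enc = 0.1 * rng.normal(size=(8, 2))
W_dec = 0.1 * rng.normal(size=(2, 8))
lr = 0.01
for _ in range(2000):
    hidden = benign @ W_enc        # encode
    recon = hidden @ W_dec         # decode
    err = recon - benign
    g_dec = hidden.T @ err / len(benign)
    g_enc = benign.T @ (err @ W_dec.T) / len(benign)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

def recon_error(frames):
    """Per-frame mean squared reconstruction error."""
    return np.mean((frames @ W_enc @ W_dec - frames) ** 2, axis=-1)

# Frames whose error is far above what benign traffic produces are
# flagged as attacks: spoofed or flooded payloads sit off the manifold.
threshold = np.percentile(recon_error(benign), 99)
attack = 3.0 * rng.normal(size=(10, 8))  # off-manifold payloads
flags = recon_error(attack) > threshold
```

A real deployment would use a deep (nonlinear) autoencoder and calibrate the threshold on held-out benign traffic rather than the training set.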

18 pages, 4062 KiB  
Article
Machine Learning-Enabled Smart Industrial Automation Systems Using Internet of Things
Sensors 2023, 23(1), 324; https://doi.org/10.3390/s23010324 - 28 Dec 2022
Cited by 14 | Viewed by 4900
Abstract
Industrial automation uses robotics and software to operate equipment and procedures across industries. Many applications integrate IoT, machine learning, and other technologies to provide smart features that improve the user experience. The use of such technology offers businesses and people tremendous assistance in successfully achieving commercial and noncommercial requirements. Organizations are expected to automate industrial processes owing to the significant risk management and inefficiency of conventional processes. Hence, we developed an elaborative stepwise stacked artificial neural network (ESSANN) algorithm to greatly improve automation industries in controlling and monitoring the industrial environment. Initially, an industrial dataset provided by KLEEMANN Greece was used. The collected data were then preprocessed. Principal component analysis (PCA) was used to extract features, and feature selection was based on least absolute shrinkage and selection operator (LASSO). Subsequently, the ESSANN approach is proposed to improve automation industries. The performance of the proposed algorithm was also examined and compared with that of existing algorithms. The key factors compared with existing technologies are delay, network bandwidth, scalability, computation time, packet loss, operational cost, accuracy, precision, recall, and mean absolute error (MAE). Compared to traditional algorithms for industrial automation, our proposed techniques achieved high results, such as a delay of approximately 52%, network bandwidth accomplished at 97%, scalability attained at 96%, computation time acquired at 59 s, packet loss achieved at a minimum level of approximately 53%, an operational cost of approximately 59%, accuracy of 98%, precision of 98.95%, recall of 95.02%, and MAE of 80%. By analyzing the results, it can be seen that the proposed system was effectively implemented. Full article
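The PCA feature-extraction stage of such a pipeline can be sketched as follows; the synthetic data, dimensions, and plain-SVD implementation are illustrative assumptions, not the KLEEMANN dataset or the ESSANN implementation:

```python
import numpy as np

def pca_features(X, k):
    """Project samples onto the top-k principal components.

    X: (n_samples, n_features) array. Returns the (n_samples, k)
    feature matrix and the (k, n_features) component matrix, as the
    front end of a PCA -> feature-selection -> ANN pipeline.
    """
    Xc = X - X.mean(axis=0)                      # centre each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                          # top-k variance directions
    return Xc @ components.T, components

rng = np.random.default_rng(1)
# Toy sensor log: 6 raw channels driven by only 2 independent sources,
# so 2 principal components capture almost all of the variance.
src = rng.normal(size=(200, 2))
mix = rng.normal(size=(2, 6))
X = src @ mix + 0.01 * rng.normal(size=(200, 6))
feats, comps = pca_features(X, 2)
```

In the paper's pipeline, the resulting features would then be filtered by LASSO before being fed to the stacked ANN.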

19 pages, 2691 KiB  
Article
Best Fit DNA-Based Cryptographic Keys: The Genetic Algorithm Approach
Sensors 2022, 22(19), 7332; https://doi.org/10.3390/s22197332 - 27 Sep 2022
Cited by 45 | Viewed by 1866
Abstract
DNA (deoxyribonucleic acid) cryptography has revolutionized information security by combining rigorous biological and mathematical concepts to encode original information as a DNA sequence. Such schemes depend crucially on corresponding DNA-based cryptographic keys. However, owing to redundancy or observable patterns, some keys are rendered weak, as they are prone to intrusion. This paper proposes a genetic algorithm-inspired method to strengthen weak keys obtained from random DNA-based key generators instead of discarding them entirely. The fitness functions and genetic operators have been chosen and modified to suit the fundamentals of DNA cryptography, in contrast to fitness functions for traditional cryptographic schemes. The crossover and mutation rates are reduced with each new population, as more keys pass the fitness tests and need not be strengthened. Moreover, as the size of the initial key population increases, the key space becomes highly exhaustive and less prone to brute-force attacks. The paper demonstrates that, out of an initial 25 × 25 population of DNA keys, 14 keys are rendered weak. Complete results and calculations showing how each weak key can be strengthened by generating four new populations are illustrated. Analysis of the proposed scheme for different initial populations shows that at most eight new populations must be generated to strengthen all 500 weak keys of a 500 × 500 initial population. Full article
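The strengthen-rather-than-discard idea can be sketched with a minimal genetic loop. The fitness function, strength criteria, and rates below are illustrative stand-ins chosen for this sketch (balanced base frequencies, no long single-base runs), not the paper's actual functions:

```python
import random

def longest_run(key):
    """Length of the longest single-base repeat in the key."""
    best = cur = 1
    for a, b in zip(key, key[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

def fitness(key):
    """Higher is better: balanced base frequencies, short repeats."""
    balance = 1.0 - max(key.count(b) for b in "ACGT") / len(key)
    return balance - 0.05 * longest_run(key)

def is_strong(key):
    """A key passes if no base dominates and no repeat exceeds 3."""
    dominant = max(key.count(b) for b in "ACGT") / len(key)
    return dominant <= 0.4 and longest_run(key) <= 3

def strengthen(key, rng, pop_size=20, mutation_rate=0.2, max_gens=200):
    """Evolve a weak key with mutation + selection until one passes."""
    pop = [key]
    for _ in range(max_gens):
        passing = [k for k in pop if is_strong(k)]
        if passing:
            return max(passing, key=fitness)
        # Mutation: resample a fraction of each survivor's bases.
        children = ["".join(c if rng.random() > mutation_rate
                            else rng.choice("ACGT") for c in k)
                    for k in pop for _ in range(2)]
        # Selection: keep the fittest candidates for the next generation.
        pop = sorted(set(pop + children), key=fitness, reverse=True)[:pop_size]
    return None  # could not be strengthened within max_gens

rng = random.Random(42)
weak = "AAAAAAAACCCCCCCCGGGGAAAA"  # A-heavy with long runs: fails the test
strong = strengthen(weak, rng)
```

The paper additionally applies crossover and decays the crossover/mutation rates across populations; this sketch keeps only the mutate-and-select core.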

16 pages, 1161 KiB  
Article
Towards High Accuracy Pedestrian Detection on Edge GPUs
Sensors 2022, 22(16), 5980; https://doi.org/10.3390/s22165980 - 10 Aug 2022
Cited by 16 | Viewed by 1494
Abstract
Despite the rapid development of pedestrian detection algorithms, the balance between detection accuracy and efficiency is still far from being achieved, because edge GPUs (with low computing power) limit the number of model parameters. To address this issue, we propose YOLOv4-TP-Tiny, based on the YOLOv4 model, which mainly includes two modules: two-dimensional attention (TA) and a pedestrian-based feature extraction module (PFM). First, we integrate the TA mechanism into the backbone network, which increases the network's attention to the visible area of pedestrians and improves the accuracy of pedestrian detection. Then, the PFM is used to replace the original spatial pyramid pooling (SPP) structure in YOLOv4 to obtain the YOLOv4-TP algorithm, which can adapt to people of different sizes to achieve higher detection accuracy. To maintain detection speed, we replace normal convolutions with a ghost network with a TA mechanism, yielding more feature maps with fewer parameters, and we construct a one-way multi-scale feature fusion structure to replace the down-sampling process, thereby reducing network parameters and obtaining the YOLOv4-TP-Tiny model. The experimental results show that YOLOv4-TP-Tiny achieves 58.3% AP at 31 FPS on the WiderPerson pedestrian dataset; under the same hardware conditions and dataset, YOLOv4-tiny achieves 55.9% AP at 29 FPS. Full article
(This article belongs to the Special Issue