Feature Papers in the Internet of Things Section 2022

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Internet of Things".

Deadline for manuscript submissions: closed (31 December 2022) | Viewed by 66813

Special Issue Editors


Dr. Raffaele Bruno
Guest Editor
Institute for Informatics and Telematics (IIT), National Research Council of Italy (CNR), Via G. Moruzzi, 1, I-56124 Pisa, Italy
Interests: MAC protocols for wireless networks; architectures and protocols for the Internet of Things; vehicular networks; 5G networks; smart transportation; smart grids and smart buildings

Prof. Dr. Leopoldo Angrisani
Guest Editor
Dipartimento di Ingegneria Elettrica e delle Tecnologie dell’Informazione, Università degli Studi di Napoli Federico II, 80125 Naples, Italy
Interests: communication systems and networks test and measurement; measurements for Internet of Things applications; compressive sampling based measurements; measurements for Industry 4.0; measurement uncertainty

Dr. Nikos Fotiou
Guest Editor
Mobile Multimedia Laboratory, Department of Informatics, School of Information Sciences and Technology, Athens University of Economics and Business, 104 34 Athens, Greece
Interests: access control; blockchain technologies; cryptography; information-centric networking; IoT; privacy; security; web technologies

Dr. Ismail Butun
Guest Editor
Division of Network and Systems Engineering, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, 114 28 Stockholm, Sweden
Interests: security of IoT, IIoT, cyber-physical systems and smart grids, with a particular focus on LoRaWAN networks

Special Issue Information

Dear Colleagues,

We are pleased to announce that the Section Internet of Things is now compiling a collection of papers submitted by the Editorial Board Members (EBMs) of our section and outstanding scholars in this research field. We welcome contributions as well as recommendations from EBMs.

We expect original papers and review articles presenting state-of-the-art theoretical and applied advances, new experimental discoveries, and novel technological improvements regarding the Internet of Things. We expect these papers to be widely read and highly influential within the field. All papers in this Special Issue will be collected into a printed edition book after the deadline and will be well promoted.

We would also like to take this opportunity to invite more outstanding scholars to join the Section Internet of Things so that we can work together to further develop this exciting field of research.

Dr. Raffaele Bruno
Prof. Dr. Leopoldo Angrisani
Dr. Nikos Fotiou
Dr. Ismail Butun
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (21 papers)

Research

20 pages, 8326 KiB  
Article
Power Efficient Machine Learning Models Deployment on Edge IoT Devices
by Anastasios Fanariotis, Theofanis Orphanoudakis, Konstantinos Kotrotsios, Vassilis Fotopoulos, George Keramidas and Panagiotis Karkazis
Sensors 2023, 23(3), 1595; https://doi.org/10.3390/s23031595 - 01 Feb 2023
Cited by 6 | Viewed by 3294
Abstract
Computing has undergone a significant transformation over the past two decades, shifting from a machine-based approach to a human-centric, virtually invisible service known as ubiquitous or pervasive computing. This change has been achieved by incorporating small embedded devices into a larger computational system, connected through networking and referred to as edge devices. When these devices are also connected to the Internet, they are generally called Internet-of-Things (IoT) devices. Developing Machine Learning (ML) algorithms on these types of devices allows them to provide Artificial Intelligence (AI) inference functions such as computer vision and pattern recognition. However, this capability is severely limited by the device’s resource scarcity: embedded devices have limited computational and power resources available while they must maintain a high degree of autonomy. Although several published studies address the computational weakness of these small systems, mostly through optimization and compression of neural networks, they often neglect the power consumption and efficiency implications of these techniques. This study presents experimental power efficiency results from the application of well-known and proven optimization methods to a set of well-known ML models. The results are presented with the “real-world” functionality of the devices in mind and are compared against the basic “idle” power consumption of each of the selected systems. Two systems with completely different architectures and capabilities were used, yielding results that led to interesting conclusions about the power efficiency of each architecture.
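
The comparison against each device's idle draw can be expressed with a simple energy model: the energy attributable to inference is the active-minus-idle power integrated over the inference time. The short sketch below illustrates only this bookkeeping with hypothetical numbers; it is not code or data from the paper.

# Illustrative sketch: net energy per inference relative to idle power.
# The numbers below are hypothetical, not measurements from the paper.

def net_energy_per_inference(active_power_w, idle_power_w, inference_time_s, batch=1):
    """Energy (joules) attributable to inference alone, above the idle baseline."""
    return (active_power_w - idle_power_w) * inference_time_s / batch

# Example: an MCU drawing 0.18 W while running an int8 CNN for 42 ms,
# against an idle draw of 0.05 W.
e_j = net_energy_per_inference(active_power_w=0.18, idle_power_w=0.05, inference_time_s=0.042)
print(f"{e_j * 1000:.2f} mJ per inference above idle")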

18 pages, 6913 KiB  
Article
Inertial Sensor-Based Sport Activity Advisory System Using Machine Learning Algorithms
by Justyna Patalas-Maliszewska, Iwona Pajak, Pascal Krutz, Grzegorz Pajak, Matthias Rehm, Holger Schlegel and Martin Dix
Sensors 2023, 23(3), 1137; https://doi.org/10.3390/s23031137 - 19 Jan 2023
Cited by 8 | Viewed by 1748
Abstract
The aim of this study was to develop a physical activity advisory system supporting the correct performance of sport exercises using inertial sensors and machine learning algorithms. Specifically, three mobile sensors (tags), six stationary anchors and a system-controlling server (gateway) were employed for 15 scenarios consisting of series of subsequent activities, namely squats, pull-ups and dips. The proposed solution consists of two modules: an activity recognition module (ARM) and a repetition-counting module (RCM). The former is responsible for extracting the series of subsequent activities (the so-called scenario), and the latter determines the number of repetitions of a given activity in a single series. The data used in this study contained 488 occurrences of the three defined sport activities. Data processing was conducted to enhance performance, considering overlapping and non-overlapping windows, raw and normalized data, a convolutional neural network (CNN) with an additional post-processing block (PPB), and repetition counting. The developed system achieved satisfactory accuracy (CNN + PPB): non-overlapping window and raw data, 0.88; non-overlapping window and normalized data, 0.78; overlapping window and raw data, 0.92; overlapping window and normalized data, 0.87. For repetition counting, the achieved accuracies were 0.93 and 0.97 within an error of ±1 and ±2 repetitions, respectively. The achieved results indicate that the proposed system could be a helpful tool to support the correct performance of sport exercises and could, in further work, be implemented as a web application detecting the user’s sport activity.
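
As a rough illustration of window-based activity recognition on inertial data, the sketch below builds a small 1D CNN in Keras over fixed-length accelerometer windows. The window length, channel count, layer sizes and the synthetic data are illustrative assumptions, not the ARM/RCM architecture evaluated in the paper.

import numpy as np
import tensorflow as tf

# Generic 1D-CNN sketch for window-based activity recognition (squats, pull-ups, dips).
# Window length, channel count and layer sizes are illustrative assumptions,
# not the configuration used in the paper.
WINDOW, CHANNELS, CLASSES = 128, 3, 3

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# x: (n_windows, WINDOW, CHANNELS) sensor windows, y: integer activity labels (synthetic here).
x, y = np.random.rand(32, WINDOW, CHANNELS), np.random.randint(0, CLASSES, 32)
model.fit(x, y, epochs=1, verbose=0)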

16 pages, 1100 KiB  
Article
Technological Transformation of Telco Operators towards Seamless IoT Edge-Cloud Continuum
by Kasim Oztoprak, Yusuf Kursat Tuncel and Ismail Butun
Sensors 2023, 23(2), 1004; https://doi.org/10.3390/s23021004 - 15 Jan 2023
Cited by 11 | Viewed by 2477
Abstract
This article investigates and discusses challenges in the telecommunication field from multiple perspectives, catering for both the academic and industry sides, and surveys the main points of the technological transformation toward the edge-cloud continuum from the view of a telco operator to show the complete picture, including the evolution of cloud-native computing, Software-Defined Networking (SDN), and network automation platforms. The cultural shift in software development and management brought by DevOps enabled the development of significant technologies in the telecommunication world, including network equipment, application development, and system orchestration. The effect of this cultural shift on the application area, especially from the IoT point of view, is investigated. The enormous change in service diversity and in delivery capabilities to mass devices is also discussed. During the last two decades, desktop and server virtualization has played an active role in the Information Technology (IT) world. With the use of OpenFlow, SDN, and Network Functions Virtualization (NFV), the network revolution got under way. The shift from monolithic application development and deployment to micro-services changed the whole picture. At the same time, data centers have evolved over several generations in which the control plane cannot cope with all the networks without an intelligent decision-making process benefiting from AI/ML techniques. AI also enables operators to forecast demand more accurately, anticipate network load, and adjust capacity and throughput automatically. Going one step further, zero-touch networking and service management (ZSM) is proposed to turn high-level human intents into low-level configurations for network elements with validated results, minimizing the ratio of faults caused by human intervention. Harmonizing all of this progress across different communication technologies has enabled the successful use of edge computing. Low-powered (from both the energy and processing perspectives) IoT networks have disrupted customer and end-point demands within the sector and, as such, paved the path toward the edge computing concept, which completes the whole picture of the edge-cloud continuum.

17 pages, 5928 KiB  
Article
A Low-Cost Hardware Architecture for EV Battery Cell Characterization Using an IoT-Based Platform
by Rafael Martínez-Sánchez, Ángel Molina-García, Alfonso P. Ramallo-González, Juan Sánchez-Valverde and Benito Úbeda-Miñarro
Sensors 2023, 23(2), 816; https://doi.org/10.3390/s23020816 - 10 Jan 2023
Cited by 5 | Viewed by 2076
Abstract
Since 1997, when the first hybrid vehicle was launched on the market, the number of NiMH batteries discarded due to obsolescence has not stopped increasing, and it has grown even faster recently due to the progressive disappearance of combustion-engine vehicles from the market. The battery technologies used are mostly NiMH for hybrid vehicles and Li-ion for pure electric vehicles, making recycling difficult due to the hazardous materials they contain. For this reason, and with the aim of extending battery life, including a second life within electric vehicle applications, this paper describes and evaluates a low-cost system to characterize individual cells of commercial electric vehicle batteries by identifying abnormally performing cells that should be taken out of use, minimizing regeneration costs in a more sustainable manner. A platform based on IoT technology is developed, allowing the automation of charging and discharging cycles of each independent cell according to parameters given by the user, and monitoring the real-time data of such battery cells. A case study based on a commercial Toyota Prius battery is also included in the paper. The results show the suitability of the proposed solution as an alternative way to characterize individual cells for subsequent electric vehicle applications, decreasing operating costs and providing an autonomous, flexible, and reliable system.
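
One way such a platform could stream per-cell readings to its monitoring server is over MQTT; the paper's abstract does not name the protocol, so the broker address, topic layout and payload fields below are purely illustrative assumptions.

import json, time
import paho.mqtt.publish as publish  # paho-mqtt client library

def publish_cell_reading(pack_id, cell_id, voltage_v, current_a, temp_c,
                         broker="broker.example.org"):
    # Hypothetical topic layout and payload fields for per-cell telemetry
    # during an automated charge/discharge cycle (not the paper's design).
    payload = json.dumps({"pack": pack_id, "cell": cell_id, "v": voltage_v,
                          "i": current_a, "t": temp_c, "ts": time.time()})
    publish.single(f"battery/{pack_id}/cell/{cell_id}", payload, qos=1, hostname=broker)

# Example reading from one cell while discharging at 6.5 A (requires a reachable broker).
publish_cell_reading("prius-01", 7, voltage_v=7.42, current_a=-6.5, temp_c=31.2)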

22 pages, 1884 KiB  
Article
Delay-Packet-Loss-Optimized Distributed Routing Using Spiking Neural Network in Delay-Tolerant Networking
by Gandhimathi Velusamy and Ricardo Lent
Sensors 2023, 23(1), 310; https://doi.org/10.3390/s23010310 - 28 Dec 2022
Viewed by 2780
Abstract
Satellite communication is inevitable due to the Internet of Everything and the exponential increase in the usage of smart devices. Satellites have been used in many applications to make human life safe, secure, sophisticated, and more productive. The applications that benefit from satellite communication are Earth observation (EO), military missions, disaster management, and 5G/6G integration, to name a few. These applications rely on the timely and accurate delivery of space data to ground stations. However, the channels between satellites and ground stations suffer attenuation caused by uncertain weather conditions and long delays due to line-of-sight constraints, congestion, and physical distance. Though inter-satellite links (ISLs) and inter-orbital links (IOLs) create multiple paths between satellite nodes, both ISLs and IOLs have the same issues. Some essential applications, such as EO, depend on time-sensitive and error-free data delivery, which needs better throughput connections. It is challenging to route space data to ground stations with better QoS by leveraging the ISLs and IOLs. Routing approaches that use the shortest path to optimize latency may cause packet losses and reduced throughput based on the channel conditions, while routing methods that try to avoid packet losses may end up delivering data with long delays. Existing routing algorithms that use multi-optimization goals tend to use priority-based optimization to optimize either of the metrics. However, critical satellite missions that depend on high-throughput and low-latency data delivery need routing approaches that optimize both metrics concurrently. We used a modified version of Kleinrock’s power metric to reduce delay and packet losses and verified it with experimental evaluations. We used a cognitive space routing approach, which uses a reinforcement-learning-based spiking neural network to implement routing strategies in NASA’s High Rate Delay Tolerant Networking (HDTN) project. Full article
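
Kleinrock's power metric scores a link or path as throughput divided by delay, so maximizing it favours routes that are simultaneously fast and high-throughput. The abstract states that a modified version also accounts for packet losses; the loss-discounted variant sketched below is only an illustrative assumption, not necessarily the modification used in the paper.

# Kleinrock's power metric: throughput / delay.  The loss-discounted variant below
# (scaling throughput by the delivery ratio) is an illustrative assumption, not
# necessarily the modification used in the paper.

def power_metric(throughput_bps, delay_s, loss_rate=0.0):
    effective_throughput = throughput_bps * (1.0 - loss_rate)
    return effective_throughput / delay_s

# Pick the next hop whose link currently maximizes the metric (hypothetical links).
links = {
    "ISL-A": {"throughput_bps": 40e6, "delay_s": 0.120, "loss_rate": 0.02},
    "IOL-B": {"throughput_bps": 25e6, "delay_s": 0.045, "loss_rate": 0.10},
}
best = max(links, key=lambda k: power_metric(**links[k]))
print(best, power_metric(**links[best]))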

14 pages, 3100 KiB  
Article
Water Meter Reading for Smart Grid Monitoring
by Fabio Martinelli, Francesco Mercaldo and Antonella Santone
Sensors 2023, 23(1), 75; https://doi.org/10.3390/s23010075 - 21 Dec 2022
Cited by 10 | Viewed by 3795
Abstract
Many tasks that once required a large workforce are now automated. In many areas of the world, however, the consumption of utilities such as electricity, gas and water is monitored by meters that need to be read by humans, which requires the presence of an employee or a representative of the utility provider. Automatic meter reading is crucial in the implementation of smart grids. For this reason, and with the aim of boosting the implementation of the smart grid paradigm, in this paper we propose a method to automatically read digits from a dial meter. In detail, the proposed method localises the dial meter in an image, detects the digits and classifies them. Deep learning is exploited; in particular, the YOLOv5s model is used for the localisation of digits and for their recognition. An experimental real-world case study is presented to confirm the effectiveness of the proposed method for automatic digit localisation and recognition from dial meters.
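
For readers unfamiliar with the YOLOv5 tooling, the sketch below shows the general usage pattern: load a YOLOv5s model (here with a hypothetical weight file fine-tuned on the ten digit classes), run it on a meter photo, and sort the detected digits left to right to form the reading. It is a generic usage sketch, not the exact pipeline of the paper.

import torch

# Load a YOLOv5s model; 'digits_best.pt' stands in for hypothetical weights
# fine-tuned on the ten digit classes of dial-meter imagery.
model = torch.hub.load("ultralytics/yolov5", "custom", path="digits_best.pt")

results = model("meter_photo.jpg")          # run detection on a meter image
det = results.pandas().xyxy[0]              # xmin, ymin, xmax, ymax, confidence, class, name

# Order detected digits left to right and concatenate them into a reading.
reading = "".join(det.sort_values("xmin")["name"].tolist())
print("meter reading:", reading)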

22 pages, 470 KiB  
Article
Recent Advances in Artificial Intelligence and Tactical Autonomy: Current Status, Challenges, and Perspectives
by Desta Haileselassie Hagos and Danda B. Rawat
Sensors 2022, 22(24), 9916; https://doi.org/10.3390/s22249916 - 16 Dec 2022
Cited by 5 | Viewed by 4922
Abstract
This paper presents the findings of a detailed and comprehensive review of the technical literature aimed at identifying the current and future research challenges of tactical autonomy. It discusses in great detail the current state of the art in artificial intelligence (AI), machine learning (ML), and robot technologies, and their potential for developing safe and robust autonomous systems in the context of future military and defense applications. Additionally, we discuss some of the critical technical and operational challenges that arise when attempting to practically build fully autonomous systems for advanced military and defense applications. Our paper surveys the state-of-the-art AI methods available for tactical autonomy. To the best of our knowledge, this is the first work that addresses the important current trends, strategies, critical challenges, tactical complexities, and future research directions of tactical autonomy. We believe this work will greatly interest researchers and scientists from academia and industry working in robotics and the autonomous systems community. We hope it encourages researchers across multiple disciplines of AI to explore the broader tactical autonomy domain, and that it serves as an essential step toward designing advanced AI and ML models with practical implications for real-world military and defense settings.

12 pages, 1045 KiB  
Article
Modeling Driver Behavior in Road Traffic Simulation
by Teodora Mecheva, Radoslav Furnadzhiev and Nikolay Kakanakov
Sensors 2022, 22(24), 9801; https://doi.org/10.3390/s22249801 - 14 Dec 2022
Cited by 2 | Viewed by 1917
Abstract
Driver behavior models are an important part of road traffic simulation modeling. They encompass characteristics such as mood, fatigue, and response to distracting conditions. The relationships between external factors and the way drivers perform tasks can also be represented in models. This article proposes a methodology for establishing the parameters of driver behavior models. The methodology is based on road traffic data and determines the car-following model, the routing algorithm, and their parameters that best describe driving habits. Sequential and parallel implementations of the methodology using the urban mobility simulator SUMO and Python are proposed. Four car-following models and three routing algorithms, along with their parameters, are investigated. The results of the performed simulations prove the applicability of the methodology. Based on more than 7000 simulations, it is concluded that in future experiments on traffic in Plovdiv it is appropriate to use the Contraction Hierarchies routing algorithm with the default routing step and the Krauss car-following model with the default configuration parameters.
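
A minimal sketch of how such simulations can be driven from Python is shown below, using SUMO's TraCI API to launch a scenario with the Contraction Hierarchies router (the Krauss model is SUMO's default car-following model) and to collect mean speeds per step. The configuration file name and step count are hypothetical.

import traci  # SUMO's TraCI Python API (requires a local SUMO installation)

# Minimal sketch: run a SUMO scenario with the Contraction Hierarchies router and
# the default (Krauss) car-following model, collecting mean speeds per step.
# The configuration file name is hypothetical.
traci.start(["sumo", "-c", "plovdiv.sumocfg", "--routing-algorithm", "CH"])

mean_speeds = []
for _ in range(360):                      # simulate 360 steps
    traci.simulationStep()
    vehicles = traci.vehicle.getIDList()
    if vehicles:
        mean_speeds.append(sum(traci.vehicle.getSpeed(v) for v in vehicles) / len(vehicles))
traci.close()

print("average network speed [m/s]:", sum(mean_speeds) / max(len(mean_speeds), 1))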

27 pages, 1819 KiB  
Article
Applied Machine Learning for IIoT and Smart Production—Methods to Improve Production Quality, Safety and Sustainability
by Attila Frankó, Gergely Hollósi, Dániel Ficzere and Pal Varga
Sensors 2022, 22(23), 9148; https://doi.org/10.3390/s22239148 - 25 Nov 2022
Cited by 3 | Viewed by 3277
Abstract
Industrial IoT (IIoT) has revolutionized production by making data available to stakeholders at many levels much faster, with much greater granularity than ever before. When it comes to smart production, the aim of analyzing the collected data is usually to achieve greater efficiency in general, which includes increasing production but decreasing waste and using less energy. Furthermore, the boost in communication provided by IIoT requires special attention to increased levels of safety and security. The growth in machine learning (ML) capabilities in the last few years has affected smart production in many ways. The current paper provides an overview of applying various machine learning techniques for IIoT, smart production, and maintenance, especially in terms of safety, security, asset localization, quality assurance and sustainability aspects. The approach of the paper is to provide a comprehensive overview on the ML methods from an application point of view, hence each domain—namely security and safety, asset localization, quality control, maintenance—has a dedicated chapter, with a concluding table on the typical ML techniques and the related references. The paper summarizes lessons learned, and identifies research gaps and directions for future work. Full article

16 pages, 765 KiB  
Article
Selective Content Retrieval in Information-Centric Networking
by José Quevedo and Daniel Corujo
Sensors 2022, 22(22), 8742; https://doi.org/10.3390/s22228742 - 12 Nov 2022
Cited by 4 | Viewed by 1455
Abstract
Recently, novel networking architectures have emerged to cope with fast-evolving and new Internet utilisation patterns. Information-Centric Networking (ICN) is a prominent example of such an architecture. By perceiving content as the core element of the networking functionalities, ICN opens up a whole new avenue of information exchange optimisation possibilities. This paper presents an approach that extends the base operation of ICN and leverages content identification right at the network layer, making it possible to selectively retrieve partial pieces of information from content already present in ICN in-network caches. Additionally, this proposal enables information producers to seamlessly offload some content processing tasks into the network. The concept is discussed and demonstrated through a proof-of-concept prototype targeting an Internet of Things (IoT) scenario, where consumers retrieve specific pieces of the whole information generated by sensors. The obtained results showcase reduced traffic and storage consumption at the core of the network.

19 pages, 674 KiB  
Article
ECO6G: Energy and Cost Analysis for Network Slicing Deployment in Beyond 5G Networks
by Anurag Thantharate, Ankita Vijay Tondwalkar, Cory Beard and Andres Kwasinski
Sensors 2022, 22(22), 8614; https://doi.org/10.3390/s22228614 - 08 Nov 2022
Cited by 6 | Viewed by 1710
Abstract
Fifth-generation (5G) wireless technology promises to be the critical enabler of use cases far beyond smartphones and other connected devices. This next-generation 5G wireless standard represents the changing face of connectivity by enabling elevated levels of automation through continuous optimization of several Key Performance Indicators (KPIs) such as latency, reliability, connection density, and energy efficiency. Mobile Network Operators (MNOs) must promote and implement innovative technologies and solutions to reduce network energy consumption while delivering high-speed and low-latency services, in order to deploy energy-efficient 5G networks with a reduced carbon footprint. This research evaluates an energy-saving method using data-driven learning through load estimation for Beyond 5G (B5G) networks. The proposed ‘ECO6G’ model utilizes a supervised Machine Learning (ML) approach for forecasting traffic load and uses the estimated load to evaluate the energy efficiency and OPEX savings. The simulation results provide a comparative analysis between traditional time-series forecasting methods and the proposed ML model that utilizes learned parameters. Our ECO6G dataset is captured from measurements on a real-world operational 5G base station (BS). We showcase simulations using our ECO6G model for a given dataset and demonstrate that the proposed ECO6G model is accurate to within $4.3 million in OPEX over 100,000 BSs over 5 years, whereas the three other data-driven and statistical learning models considered would increase OPEX costs by between $370 million and $1.87 billion under varying network load scenarios.
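
The abstract describes supervised forecasting of traffic load and using the estimate to quantify energy savings; the generic sketch below illustrates that idea with scikit-learn on synthetic load data and a simple low-load threshold for when a carrier could be powered down. The model type, lag count and threshold are assumptions, not the ECO6G design.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Generic sketch of supervised load forecasting from lagged samples; the model type,
# lag count and sleep threshold are illustrative assumptions, not ECO6G itself.
rng = np.random.default_rng(0)
load = 50 + 30 * np.sin(np.arange(2000) * 2 * np.pi / 96) + rng.normal(0, 3, 2000)  # synthetic load [%]

LAGS = 8
X = np.column_stack([load[i:len(load) - LAGS + i] for i in range(LAGS)])
y = load[LAGS:]

model = GradientBoostingRegressor().fit(X[:-96], y[:-96])
forecast = model.predict(X[-96:])              # predicted load for the next 96 slots

SLEEP_THRESHOLD = 20.0                         # percent load below which a carrier could sleep (assumed)
sleep_slots = (forecast < SLEEP_THRESHOLD).sum()
print(f"predicted low-load slots suitable for carrier shutdown: {sleep_slots}/96")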

9 pages, 2389 KiB  
Communication
Multiple Fingerprinting Localization by an Artificial Neural Network
by Jaehyun Yoo
Sensors 2022, 22(19), 7505; https://doi.org/10.3390/s22197505 - 03 Oct 2022
Cited by 4 | Viewed by 1477
Abstract
Fingerprinting localization is a promising indoor positioning method thanks to its advantage of using preinstalled infrastructure; for example, WiFi signal strength can be measured by pre-existing WiFi routers. In the offline phase, the fingerprinting localization method first stores position and RSSI measurement pairs in a dataset. In the online phase, it predicts a target’s location by comparing the stored fingerprint database to the current measurement. The database size is normally huge, and data patterns are complicated; thus, an artificial neural network is used to model the relationship between fingerprints and locations. Existing fingerprinting localization methods, however, have been developed to predict only single locations. In practice, many users may require positioning services at the same time, so the core algorithm should be capable of multiple localizations, which is the main contribution of this paper. Here, multiple fingerprinting localization is developed based on an artificial neural network, and an analysis of the number of targets that can be estimated without loss of accuracy is conducted experimentally.
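
The offline/online fingerprinting pipeline described in the abstract can be illustrated with a small regression network that maps RSSI vectors to coordinates; the sketch below uses scikit-learn on a synthetic log-distance RSSI model and covers only the single-target baseline, not the paper's multi-target extension.

import numpy as np
from sklearn.neural_network import MLPRegressor

# Offline phase: store (RSSI vector, position) pairs and fit an ANN.
# Online phase: predict a position from a new RSSI measurement.
# Anchor layout, path-loss model and network size are illustrative assumptions.
rng = np.random.default_rng(1)
n_aps, n_samples = 6, 500
positions = rng.uniform(0, 20, size=(n_samples, 2))                 # ground-truth (x, y) in metres
anchors = rng.uniform(0, 20, size=(n_aps, 2))                       # WiFi router locations
dists = np.linalg.norm(positions[:, None, :] - anchors[None], axis=2)
rssi = -40 - 20 * np.log10(dists + 0.1) + rng.normal(0, 2, dists.shape)   # log-distance model + noise

ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(rssi[:400], positions[:400])
pred = ann.predict(rssi[400:])
print("mean localization error [m]:", np.linalg.norm(pred - positions[400:], axis=1).mean())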

18 pages, 2906 KiB  
Article
Motion Shield: An Automatic Notifications System for Vehicular Communications
by Petros Balios, Philotas Kyriakidis, Stelios Zimeras, Petros S. Bithas and Lambros Sarakis
Sensors 2022, 22(6), 2419; https://doi.org/10.3390/s22062419 - 21 Mar 2022
Viewed by 2191
Abstract
Motion Shield is an automatic crash notification system that uses a mobile phone to generate automatic alerts related to the safety of a user travelling in a means of transportation. The objective of Motion Shield is to improve road safety by considering a moving vehicle’s risk, estimating the probability of an emergency, and assessing the likelihood of an accident. Using multiple sources of external information, namely the mobile phone sensors’ readings, geolocated information, weather data, and historical evidence of traffic accidents, the system processes a plethora of parameters in order to predict the onset of an accident and act preventively. All the collected data are forwarded to a decision support system which dynamically calculates the mobility risk and driving behavior aspects in order to proactively send personalized notifications and alerts to the user and to a public safety answering point (PSAP, e.g., the European emergency number 112).
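
One small ingredient of an automatic crash notification pipeline is flagging a candidate crash when the phone's acceleration magnitude exceeds a threshold; the sketch below shows only that step with an assumed threshold, whereas the actual Motion Shield decision support system fuses many more inputs (weather, location, accident history).

import math

CRASH_G = 4.0                    # candidate-crash threshold in g (assumed value)

def is_candidate_crash(ax, ay, az, g=9.81):
    # Flag a sample whose total acceleration magnitude exceeds the threshold.
    magnitude_g = math.sqrt(ax * ax + ay * ay + az * az) / g
    return magnitude_g >= CRASH_G

samples = [(0.1, 0.2, 9.8), (3.0, 1.0, 10.5), (25.0, 18.0, 30.0)]   # accelerometer readings in m/s^2
print([is_candidate_crash(*s) for s in samples])   # only the last, violent sample triggers an alert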

16 pages, 5832 KiB  
Article
SVIoT: A Secure Visual-IoT Framework for Smart Healthcare
by Javaid A. Kaw, Solihah Gull and Shabir A. Parah
Sensors 2022, 22(5), 1773; https://doi.org/10.3390/s22051773 - 24 Feb 2022
Cited by 5 | Viewed by 2330
Abstract
The advancement of the Internet of Things (IoT) has transfigured the overlay of the physical world by superimposing digital information in various sectors, including smart cities, industry, healthcare, etc. Among the various types of shared information, visual data are an indispensable part of smart cities, especially in healthcare. As a result, visual-IoT research is gathering momentum. In visual IoT, visual sensors, such as cameras, collect critical multimedia information about industries, healthcare, shopping, autonomous vehicles, crowd management, etc. In healthcare, patient-related data are captured and then transmitted via insecure transmission lines. The security of these data is of paramount importance. Besides the fact that visual data require a large bandwidth, the gap between communication and computation is an additional challenge for visual IoT system development. In this paper, we present SVIoT, a Secure Visual-IoT framework, which addresses the issues of both data security and resource constraints in IoT-based healthcare. This was achieved by proposing a novel reversible data hiding (RDH) scheme based on One Dimensional Neighborhood Mean Interpolation (ODNMI). The use of ODNMI reduces the computational complexity and storage/bandwidth requirements by 50 percent. We upscaled the original image from M × N to M × 2N, unlike conventional interpolation methods, in which images are upscaled to 2M × 2N. We made use of an innovative mechanism, Left Data Shifting (LDS), before embedding data in the cover image. Before embedding the data, we encrypted it using an AES-128 encryption algorithm to offer additional security. The use of LDS ensures better perceptual quality at a relatively high payload. We achieved an average PSNR of 43 dB for a payload of 1.5 bpp (bits per pixel). In addition, we embedded a fragile watermark in the cover image to ensure authentication of the received content.
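
The AES-128 pre-encryption step mentioned in the abstract can be sketched with PyCryptodome as below; the reversible data hiding (ODNMI) embedding itself is the paper's contribution and is not reproduced, and the payload and key handling shown are illustrative only.

from Crypto.Cipher import AES          # PyCryptodome
from Crypto.Random import get_random_bytes

# Sketch of AES-128 pre-encryption of a (hypothetical) patient record before it
# would be hidden in the cover image by the RDH scheme.
key = get_random_bytes(16)                       # 128-bit key
record = b'{"patient": "anon-042", "spo2": 97, "hr": 72}'

cipher = AES.new(key, AES.MODE_EAX)
ciphertext, tag = cipher.encrypt_and_digest(record)
payload_bits = ciphertext + tag + cipher.nonce   # bitstream that would be embedded in the cover image

# Receiver side: extract payload_bits from the stego image first, then decrypt and verify.
nonce = payload_bits[-16:]
ct, tg = payload_bits[:-32], payload_bits[-32:-16]
plain = AES.new(key, AES.MODE_EAX, nonce=nonce).decrypt_and_verify(ct, tg)
assert plain == record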

23 pages, 3614 KiB  
Article
Joint Communications and Sensing Employing Multi- or Single-Carrier OFDM Communication Signals: A Tutorial on Sensing Methods, Recent Progress and a Novel Design
by Kai Wu, Jian Andrew Zhang, Xiaojing Huang and Yingjie Jay Guo
Sensors 2022, 22(4), 1613; https://doi.org/10.3390/s22041613 - 18 Feb 2022
Cited by 8 | Viewed by 3482
Abstract
Joint communications and sensing (JCAS) has recently attracted extensive attention due to its potential to substantially improve the cost, energy and spectral efficiency of Internet of Things (IoT) systems that need both radio frequency functions. Given the wide applicability of orthogonal frequency division multiplexing (OFDM) in modern communications, OFDM sensing has become one of the major research topics of JCAS. To raise awareness of some critical yet long-overlooked issues that restrict OFDM sensing capability, this paper first provides a comprehensive overview of OFDM sensing and then presents a tutorial on the issues. Moreover, some recent research efforts addressing the issues are reviewed, with interesting designs and results highlighted. In addition, the redundancy in OFDM sensing signals is unveiled, based on which a novel method is developed to remove the redundancy by introducing efficient signal decimation. Corroborated by analysis and simulation results, the new method further reduces the sensing complexity compared with one of the most efficient methods to date, with a minimal impact on the sensing performance.
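
For context, the classical OFDM sensing pipeline divides the received subcarrier symbols by the transmitted ones and then applies an IFFT across subcarriers (range) and an FFT across OFDM symbols (Doppler). The numpy sketch below reproduces that textbook processing on a synthetic single-target scene with illustrative parameters; the decimation-based method proposed in the paper is not implemented here.

import numpy as np

# Classical OFDM sensing sketch: strip the communication payload by element-wise
# division, then IFFT over subcarriers (range) and FFT over OFDM symbols (Doppler).
# All parameters and the single-target scene are illustrative.
Nc, Ns = 256, 64                      # subcarriers, OFDM symbols
df = 120e3                            # subcarrier spacing [Hz]
T = 1 / 120e3 + 2.34e-6               # OFDM symbol duration incl. cyclic prefix [s]
fc, c = 24e9, 3e8

tx = np.exp(1j * np.pi / 2 * np.random.randint(0, 4, (Nc, Ns)))   # QPSK payload symbols

R, v = 45.0, 20.0                     # target range [m] and radial velocity [m/s]
tau, fd = 2 * R / c, 2 * v * fc / c
n, m = np.arange(Nc)[:, None], np.arange(Ns)[None, :]
rx = tx * np.exp(-1j * 2 * np.pi * n * df * tau) * np.exp(1j * 2 * np.pi * m * T * fd)

G = rx / tx                                            # remove the transmitted symbols
rd_map = np.fft.fft(np.fft.ifft(G, axis=0), axis=1)    # range (rows) x Doppler (cols)

r_idx, d_idx = np.unravel_index(np.abs(rd_map).argmax(), rd_map.shape)
print("range estimate [m]:", r_idx * c / (2 * Nc * df))
print("velocity estimate [m/s]:", d_idx * c / (2 * fc * Ns * T))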

29 pages, 19618 KiB  
Article
Automated License Plate Recognition for Resource-Constrained Environments
by Heshan Padmasiri, Jithmi Shashirangana, Dulani Meedeniya, Omer Rana and Charith Perera
Sensors 2022, 22(4), 1434; https://doi.org/10.3390/s22041434 - 13 Feb 2022
Cited by 22 | Viewed by 7306
Abstract
The incorporation of deep-learning techniques in embedded systems has enhanced the capabilities of edge computing to a great extent. However, most of these solutions rely on high-end hardware and often require a high processing capacity, which cannot be achieved with resource-constrained edge computing. This study presents a novel approach and a proof of concept for a hardware-efficient automated license plate recognition system for a constrained environment with limited resources. The proposed solution is implemented purely for low-resource edge devices and performs well under extreme illumination changes such as daytime and nighttime conditions. The generalisability of the proposed models is achieved using a novel set of neural networks for different hardware configurations, chosen according to computational capability and low cost. The accuracy, energy efficiency, communication, and computational latency of the proposed models are validated using different license plate datasets in the daytime and nighttime and in real time. The results also show performance competitive with state-of-the-art server-grade hardware solutions.

20 pages, 1093 KiB  
Article
Vehicle Localization Using Doppler Shift and Time of Arrival Measurements in a Tunnel Environment
by Rreze Halili, Noori BniLam, Marwan Yusuf, Emmeric Tanghe, Wout Joseph, Maarten Weyn and Rafael Berkvens
Sensors 2022, 22(3), 847; https://doi.org/10.3390/s22030847 - 22 Jan 2022
Cited by 8 | Viewed by 3286
Abstract
Most applications and services of Cooperative Intelligent Transport Systems (C-ITS) rely on accurate and continuous vehicle location information. The traditional localization method based on the Global Navigation Satellite System (GNSS) is the most commonly used. However, it does not provide reliable, continuous, and accurate positioning in all scenarios, such as tunnels. Therefore, in this work, we present an algorithm that exploits the existing Vehicle-to-Infrastructure (V2I) communication channel that operates within the LTE-V frequency band to acquire in-tunnel vehicle location information. We propose a novel solution for vehicle localization based on Doppler shift and Time of Arrival measurements. Measurements performed in the Beveren tunnel in Antwerp, Belgium, are used to obtain results. A comparison between positions estimated using an Extended Kalman Filter (EKF) on Doppler shift measurements and individual Kalman Filters (KF) on Doppler shift and Time of Arrival measurements is carried out to analyze the performance of the filtering methods. The findings show that the EKF performs better than the KF, reducing the average estimation error by 10 m, while the algorithm’s accuracy depends on the relevant RF channel propagation conditions and on other in-tunnel environment knowledge included in the estimation. The proposed solution can be used for monitoring the position and speed of vehicles driving in tunnel environments.
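
As a quick sense of scale for the Doppler measurements, the shift observed at a roadside antenna is f_d = (v/c) * f_c * cos(theta); the snippet below evaluates it for a 5.9 GHz LTE-V carrier at a few illustrative speeds (these are not measurement parameters from the paper).

import numpy as np

C = 3e8          # speed of light [m/s]
FC = 5.9e9       # LTE-V carrier frequency [Hz]

def doppler_shift(speed_mps, angle_rad):
    """f_d = (v / c) * f_c * cos(theta), theta between velocity and line of sight."""
    return speed_mps / C * FC * np.cos(angle_rad)

for v_kmh in (50, 80, 120):
    fd = doppler_shift(v_kmh / 3.6, angle_rad=0.0)   # vehicle heading straight at the antenna
    print(f"{v_kmh} km/h -> {fd:.0f} Hz")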

14 pages, 434 KiB  
Article
Privacy-Preserving Human Action Recognition with a Many-Objective Evolutionary Algorithm
by Pau Climent-Pérez and Francisco Florez-Revuelta
Sensors 2022, 22(3), 764; https://doi.org/10.3390/s22030764 - 20 Jan 2022
Cited by 4 | Viewed by 2303
Abstract
Wrist-worn devices equipped with accelerometers constitute a non-intrusive way to achieve active and assisted living (AAL) goals, such as automatic journaling for self-reflection, i.e., lifelogging, as well as to provide other services, such as general health and wellbeing monitoring and personal autonomy assessment, among others. Human action recognition (HAR), and in particular the recognition of activities of daily living (ADLs), can be used for these types of assessment or journaling. In this paper, a many-objective evolutionary algorithm (MaOEA) is used in order to maximise action recognition for individuals while concealing (minimising recognition of) gender and age. To validate the proposed method, the PAAL accelerometer signal ADL dataset (v2.0) is used, which includes data from 52 participants (26 men and 26 women) and 24 activity class labels. The results show a drop in gender and age recognition to 58% (from 89%, a 31% drop) and to 39% (from 83%, a 44% drop), respectively, while action recognition stays closer to its initial value, dropping to 68% (from 87%, i.e., a 19% drop).

Review

33 pages, 917 KiB  
Review
Vehicular Platoon Communication: Architecture, Security Threats and Open Challenges
by Sean Joe Taylor, Farhan Ahmad, Hoang Nga Nguyen and Siraj Ahmed Shaikh
Sensors 2023, 23(1), 134; https://doi.org/10.3390/s23010134 - 23 Dec 2022
Cited by 8 | Viewed by 3269
Abstract
Vehicular platooning is an exciting emerging technology. It promises to save space on congested roadways, improve safety and use less fuel for transporting goods, reducing greenhouse gas emissions. The technology has already been shown to be vulnerable to attack and exploitation, with several attack surfaces available to attackers pursuing their goals (either personal or financial). The goal of this paper, and its contribution to the research area, is to present the attacks and defence mechanisms for vehicular platoons and to assess the risks of the attacks identified so far. The variety of attacks identified in the literature is presented, along with how they compromise the wireless communications of vehicle platoons. As part of this, a risk assessment is presented to assess the risk factor of the attacks. Finally, this paper presents the range of defences and countermeasures to vehicle platooning attacks and how they protect the safe operation of vehicular platoons.

Other

27 pages, 643 KiB  
Systematic Review
Model-Driven Engineering Techniques and Tools for Machine Learning-Enabled IoT Applications: A Scoping Review
by Zahra Mardani Korani, Armin Moin, Alberto Rodrigues da Silva and João Carlos Ferreira
Sensors 2023, 23(3), 1458; https://doi.org/10.3390/s23031458 - 28 Jan 2023
Cited by 4 | Viewed by 3779
Abstract
This paper reviews the literature on model-driven engineering (MDE) tools and languages for the internet of things (IoT). Due to the abundance of big data in the IoT, data analytics and machine learning (DAML) techniques play a key role in providing smart IoT applications. In particular, since a significant portion of the IoT data is sequential time series data, such as sensor data, time series analysis techniques are required. Therefore, IoT modeling languages and tools are expected to support DAML methods, including time series analysis techniques, out of the box. In this paper, we study and classify prior work in the literature through the mentioned lens and following the scoping review approach. Hence, the key underlying research questions are what MDE approaches, tools, and languages have been proposed and which ones have supported DAML techniques at the modeling level and in the scope of smart IoT services. Full article

20 pages, 977 KiB  
Systematic Review
Artificial Intelligence of Things Applied to Assistive Technology: A Systematic Literature Review
by Maurício Pasetto de Freitas, Vinícius Aquino Piai, Ricardo Heffel Farias, Anita M. R. Fernandes, Anubis Graciela de Moraes Rossetto and Valderi Reis Quietinho Leithardt
Sensors 2022, 22(21), 8531; https://doi.org/10.3390/s22218531 - 05 Nov 2022
Cited by 12 | Viewed by 5659
Abstract
According to the World Health Organization, about 15% of the world’s population has some form of disability. Assistive Technology, in this context, contributes directly to the overcoming of difficulties encountered by people with disabilities in their daily lives, allowing them to receive education and become part of the labor market and society in a worthy manner. Assistive Technology has made great advances in its integration with Artificial Intelligence of Things (AIoT) devices. AIoT processes and analyzes the large amount of data generated by Internet of Things (IoT) devices and applies Artificial Intelligence models, specifically, machine learning, to discover patterns for generating insights and assisting in decision making. Based on a systematic literature review, this article aims to identify the machine-learning models used across different research on Artificial Intelligence of Things applied to Assistive Technology. The survey of the topics approached in this article also highlights the context of such research, their application, the IoT devices used, and gaps and opportunities for further development. The survey results show that 50% of the analyzed research address visual impairment, and, for this reason, most of the topics cover issues related to computational vision. Portable devices, wearables, and smartphones constitute the majority of IoT devices. Deep neural networks represent 81% of the machine-learning models applied in the reviewed research. Full article
