Future Internet, Volume 16, Issue 3 (March 2024) – 40 articles

Cover Story: The study proposes a pre-signature scheme for enabling vehicle-to-vehicle (V2V) trust in rural areas with low RSU adoption. Integrating this scheme into existing standards, such as the Security Credential Management System, improves reputation dissemination management despite sparse RSU coverage. Using the traffic simulation tool SUMO, the study simulates a 24 h scenario to evaluate the performance of the pre-signature operation in rural areas. The study reveals the effect of three properties (communication range, RSU density, and overnight vehicle connectivity) on V2V efficiency in such a challenging environment. Results show improved reputation access as these factors increase, and reduced reliance on pre-signatures as RSU availability grows, benefiting areas with limited infrastructure.
21 pages, 445 KiB  
Systematic Review
Factors Affecting Trust and Acceptance for Blockchain Adoption in Digital Payment Systems: A Systematic Review
by Tenzin Norbu, Joo Yeon Park, Kok Wai Wong and Hui Cui
Future Internet 2024, 16(3), 106; https://doi.org/10.3390/fi16030106 - 21 Mar 2024
Viewed by 911
Abstract
Blockchain technology has become significant for financial sectors, especially digital payment systems, offering enhanced security, transparency, and efficiency. However, there is limited research on the factors influencing user trust in and acceptance of blockchain adoption in digital payment systems. This systematic review provides insight into the key factors shaping consumers’ perceptions of and behaviours towards embracing blockchain technology. A total of 1859 studies were collected, of which 48 met the criteria for comprehensive analysis. The results showed that security, privacy, transparency, and regulation are the most significant factors influencing trust in blockchain adoption. The most influential factors identified in the Unified Theory of Acceptance and Use of Technology (UTAUT) model include performance expectancy, effort expectancy, social influence, and facilitating conditions. Incorporating a trust and acceptance model could be a viable approach to tackling obstacles and ensuring the successful integration of blockchain technology into digital payment systems. Research on blockchain adoption in digital payment systems from the user’s perspective remains insufficient and requires further investigation. This review aims to shed light on the factors of trust in and acceptance of blockchain adoption in digital payment systems so that the full potential of blockchain technology can be realised. Understanding these factors and their intricate connections is imperative for fostering a conducive environment for the widespread acceptance of blockchain technology in digital payments. Full article
(This article belongs to the Special Issue Machine Learning for Blockchain and IoT Systems in Smart City)
18 pages, 1164 KiB  
Article
UAV Control Method Combining Reptile Meta-Reinforcement Learning and Generative Adversarial Imitation Learning
by Shui Jiang, Yanning Ge, Xu Yang, Wencheng Yang and Hui Cui
Future Internet 2024, 16(3), 105; https://doi.org/10.3390/fi16030105 - 20 Mar 2024
Viewed by 628
Abstract
Reinforcement learning (RL) is pivotal in empowering Unmanned Aerial Vehicles (UAVs) to navigate and make decisions efficiently and intelligently within complex and dynamic surroundings. Despite its significance, RL is hampered by inherent limitations such as low sample efficiency, restricted generalization capabilities, and a heavy reliance on the intricacies of reward function design. These challenges often render single-method RL approaches inadequate, particularly in the context of UAV operations, where high costs and safety risks in real-world applications cannot be overlooked. To address these issues, this paper introduces a novel RL framework that synergistically integrates meta-learning and imitation learning. By leveraging the Reptile algorithm from meta-learning and Generative Adversarial Imitation Learning (GAIL), coupled with state normalization techniques for processing state data, this framework significantly enhances the model’s adaptability. It achieves this by identifying and leveraging commonalities across various tasks, allowing for swift adaptation to new challenges without the need for complex reward function designs. To ascertain the efficacy of this integrated approach, we conducted simulation experiments in two-dimensional environments. The empirical results clearly indicate that our GAIL-enhanced Reptile method surpasses conventional single-method RL algorithms in terms of training efficiency. This evidence underscores the potential of combining meta-learning and imitation learning to surmount the traditional barriers faced by reinforcement learning in UAV trajectory planning and decision-making processes. Full article
32 pages, 28962 KiB  
Review
Using Computer Vision to Collect Information on Cycling and Hiking Trails Users
by Joaquim Miguel, Pedro Mendonça, Agnelo Quelhas, João M. L. P. Caldeira and Vasco N. G. J. Soares
Future Internet 2024, 16(3), 104; https://doi.org/10.3390/fi16030104 - 20 Mar 2024
Viewed by 1290
Abstract
Hiking and cycling have become popular activities for promoting well-being and physical activity. Portugal has been investing in hiking and cycling trail infrastructures to boost sustainable tourism. However, the lack of reliable data on the use of these trails means that the periods of greatest use and the types of user who make the most use of them are not recorded. These data are of the utmost importance to the managing bodies, which can use them to adjust their actions to improve the management, maintenance, promotion, and use of the infrastructures for which they are responsible. The aim of this work is to present a review study on projects, techniques, and methods that can be used to identify and count the different types of users on these trails. The most promising computer vision techniques are identified and described: YOLOv3-Tiny, MobileNet-SSD V2, and Faster R-CNN with ResNet-50. Their performance is evaluated and compared. The observed results can be very useful for proposing future prototypes. The challenges, future directions, and research opportunities are also discussed. Full article
31 pages, 2605 KiB  
Article
Intelligent Resource Orchestration for 5G Edge Infrastructures
by Rafael Moreno-Vozmediano, Rubén S. Montero, Eduardo Huedo and Ignacio M. Llorente
Future Internet 2024, 16(3), 103; https://doi.org/10.3390/fi16030103 - 19 Mar 2024
Viewed by 797
Abstract
The adoption of edge infrastructure in 5G environments stands out as a transformative technology aimed at meeting the increasing demands of latency-sensitive and data-intensive applications. This research paper presents a comprehensive study on the intelligent orchestration of 5G edge computing infrastructures. The proposed Smart 5G Edge-Cloud Management Architecture, built upon an OpenNebula foundation, incorporates an experimental component, ONEedge5G, which offers intelligent workload forecasting and infrastructure orchestration and automation capabilities for the optimal allocation of virtual resources across diverse edge locations. The research evaluated different forecasting models, based on both traditional statistical techniques and machine learning techniques, comparing their accuracy in CPU usage prediction for a dataset of virtual machines (VMs). Additionally, an integer linear programming formulation was proposed to solve the optimization problem of mapping VMs to physical servers in a distributed edge infrastructure. Different optimization criteria, such as minimizing server usage, load balancing, and reducing latency violations, were considered, along with mapping constraints. Comprehensive tests and experiments were conducted to evaluate the efficacy of the proposed architecture. Full article
(This article belongs to the Special Issue Edge Intelligence: Edge Computing for 5G and the Internet of Things)
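The abstract above mentions an integer linear programming formulation for mapping VMs to physical servers. As a rough illustration of its simplest optimization criterion, minimizing server usage, here is a first-fit-decreasing heuristic sketch in Python; this is a stand-in for the paper's ILP, and the VM demands, capacity value, and function names are invented for illustration.

```python
# Hypothetical sketch of VM-to-server mapping under a single CPU constraint,
# using a first-fit-decreasing heuristic instead of the paper's ILP solver.

def place_vms(vm_cpu, server_capacity):
    """Pack VMs (by decreasing CPU demand) onto as few servers as possible.

    vm_cpu: dict mapping VM name -> predicted CPU demand
    server_capacity: CPU capacity of each (homogeneous) server
    Returns a list of servers, each {"used": total_demand, "vms": [names]}.
    """
    servers = []
    for name, demand in sorted(vm_cpu.items(), key=lambda kv: -kv[1]):
        for srv in servers:
            # Place the VM on the first server with enough spare capacity.
            if srv["used"] + demand <= server_capacity:
                srv["used"] += demand
                srv["vms"].append(name)
                break
        else:
            # No existing server fits: open a new one.
            servers.append({"used": demand, "vms": [name]})
    return servers

placement = place_vms({"vm1": 0.6, "vm2": 0.5, "vm3": 0.4, "vm4": 0.3}, 1.0)
print(len(placement))  # → 2 servers used
```

A real deployment would add the paper's other criteria (load balancing, latency violations) as extra constraints or objective terms in the ILP.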
24 pages, 2749 KiB  
Article
UP-SDCG: A Method of Sensitive Data Classification for Collaborative Edge Computing in Financial Cloud Environment
by Lijun Zu, Wenyu Qi, Hongyi Li, Xiaohua Men, Zhihui Lu, Jiawei Ye and Liang Zhang
Future Internet 2024, 16(3), 102; https://doi.org/10.3390/fi16030102 - 18 Mar 2024
Viewed by 767
Abstract
The digital transformation of banks has led to a paradigm shift, promoting the open sharing of data and services with third-party providers through APIs, SDKs, and other technological means. While data sharing brings personalized, convenient, and enriched services to users, it also introduces security risks, including sensitive data leakage and misuse, highlighting the importance of data classification and grading as the foundational pillar of security. This paper presents a cloud-edge collaborative banking data open application scenario, focusing on the critical need for an accurate and automated sensitive data classification and categorization method. The regulatory outpost module addresses this requirement, aiming to enhance the precision and efficiency of data classification. Firstly, regulatory policies impose strict requirements concerning data protection. Secondly, the sheer volume of business and the complexity of the work situation make it impractical to rely on manual experts, who incur high labor costs and cannot guarantee sufficient accuracy. Therefore, we propose UP-SDCG, a scheme for automatically classifying and grading financially sensitive structured data. We developed a financial data hierarchical classification library. Additionally, we employed library augmentation technology and implemented a synonym discrimination model. We conducted an experimental analysis using simulation datasets, where UP-SDCG achieved precision surpassing 95%, outperforming the three comparison models. Moreover, we performed real-world testing in financial institutions, achieving good detection results on customer data, supervisory data, and personally sensitive information, in line with the application goals. Our ongoing work will extend the model’s capabilities to encompass unstructured data classification and grading, broadening the scope of application. Full article
2 pages, 129 KiB  
Editorial
Edge and Fog Computing for the Internet of Things
by Alessandro Pozzebon
Future Internet 2024, 16(3), 101; https://doi.org/10.3390/fi16030101 - 16 Mar 2024
Viewed by 893
Abstract
Over the last few years, the number of interconnected devices within the context of the Internet of Things (IoT) has rapidly grown; some statistics state that the total number of IoT-connected devices in 2023 reached the groundbreaking number of 17 billion [...] Full article
(This article belongs to the Special Issue Edge and Fog Computing for the Internet of Things)
16 pages, 7101 KiB  
Article
Application Scenarios of Digital Twins for Smart Crop Farming through Cloud–Fog–Edge Infrastructure
by Yogeswaranathan Kalyani, Liam Vorster, Rebecca Whetton and Rem Collier
Future Internet 2024, 16(3), 100; https://doi.org/10.3390/fi16030100 - 16 Mar 2024
Viewed by 814
Abstract
In the last decade, digital twin (DT) technology has received considerable attention across various domains, such as manufacturing, smart healthcare, and smart cities. A digital twin is a digital representation of a physical entity, object, system, or process. Although it is relatively new in the agricultural domain, it has gained increasing attention recently. Recent reviews of DTs show that this technology has the potential to revolutionise agriculture management and activities. It can also provide numerous benefits to all agricultural stakeholders, including farmers, agronomists, researchers, and others, in terms of making decisions on various agricultural processes. In smart crop farming, DTs help simulate various farming tasks like irrigation, fertilisation, nutrient management, and pest control, as well as access real-time data and guide farmers through ‘what-if’ scenarios. By utilising the latest technologies, such as cloud–fog–edge computing, multi-agent systems, and the semantic web, farmers can access real-time data and analytics. This enables them to make accurate decisions about optimising their processes and improving efficiency. This paper presents a proposed architectural framework for DTs, exploring various potential application scenarios that integrate this architecture. It also analyses the benefits and challenges of implementing this technology in agricultural environments. Additionally, we investigate how cloud–fog–edge computing contributes to developing decentralised, real-time systems essential for effective management and monitoring in agriculture. Full article
30 pages, 1063 KiB  
Article
Linked Open Government Data: Still a Viable Option for Sharing and Integrating Public Data?
by Alfonso Quarati and Riccardo Albertoni
Future Internet 2024, 16(3), 99; https://doi.org/10.3390/fi16030099 - 15 Mar 2024
Viewed by 888
Abstract
Linked Data (LD) principles, when applied to Open Government Data (OGD), aim to make government data accessible and interconnected, unlocking its full potential and facilitating widespread reuse. As a modular and scalable solution to fragmented government data, Linked Open Government Data (LOGD) improve citizens’ understanding of government functions while promoting greater data interoperability, ultimately leading to more efficient government processes. However, despite promising developments in the early 2010s, including the release of LOGD datasets by some government agencies, and studies and methodological proposals by numerous scholars, a cursory examination of government websites and portals suggests that interest in this technology has gradually waned. Given the initial expectations surrounding LOGD, this paper goes beyond a superficial analysis and provides a deeper insight into the evolution of interest in LOGD by raising questions about the extent to which the dream of LD has influenced the reality of OGD and whether it remains sustainable. Full article
13 pages, 395 KiB  
Article
Efficient and Secure Distributed Data Storage and Retrieval Using Interplanetary File System and Blockchain
by Muhammad Bin Saif, Sara Migliorini and Fausto Spoto
Future Internet 2024, 16(3), 98; https://doi.org/10.3390/fi16030098 - 15 Mar 2024
Viewed by 907
Abstract
Blockchain technology has been successfully applied in recent years to promote the immutability, traceability, and authenticity of previously collected and stored data. However, the amount of data stored in the blockchain is usually limited owing to economic and technological constraints. Namely, the blockchain usually stores only a fingerprint of the data, such as its hash, while the full, raw information is stored off-chain. This is generally enough to guarantee immutability and traceability but fails to support another important property, that is, data availability. This is particularly true when a traditional, centralized database is chosen for off-chain storage. For this reason, many proposals try to suitably combine blockchain with decentralized IPFS storage. However, storing data on IPFS can pose some privacy problems. This paper proposes a solution that combines blockchain, IPFS, and encryption techniques to guarantee immutability, traceability, availability, and data privacy. Full article
(This article belongs to the Special Issue Blockchain and Web 3.0: Applications, Challenges and Future Trends)
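The store-and-verify flow described in this abstract, encrypting data before off-chain storage and anchoring only a hash on-chain, can be sketched in a few lines. This is a minimal illustration under invented names, not the paper's protocol; the XOR keystream below is a stdlib placeholder for a real cipher such as AES-GCM.

```python
# Sketch: encrypt the payload, keep the ciphertext off-chain (e.g. on IPFS),
# and record only its SHA-256 digest on-chain for integrity checking.
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Deterministic byte stream derived from the key (placeholder cipher).
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def store(key: bytes, data: bytes):
    ciphertext = encrypt(key, data)                   # goes off-chain
    digest = hashlib.sha256(ciphertext).hexdigest()   # goes on-chain
    return ciphertext, digest

def retrieve(key: bytes, ciphertext: bytes, onchain_digest: str) -> bytes:
    # Verify against the on-chain fingerprint before decrypting.
    assert hashlib.sha256(ciphertext).hexdigest() == onchain_digest
    return encrypt(key, ciphertext)  # XOR stream cipher is its own inverse

ct, dg = store(b"secret-key", b"sensor reading 42")
print(retrieve(b"secret-key", ct, dg))  # → b'sensor reading 42'
```

This separation is what lets the blockchain guarantee immutability and traceability while the decentralized store provides availability and the encryption layer adds privacy.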
20 pages, 1837 KiB  
Article
Detection of Forged Images Using a Combination of Passive Methods Based on Neural Networks
by Ancilon Leuch Alencar, Marcelo Dornbusch Lopes, Anita Maria da Rocha Fernandes, Julio Cesar Santos dos Anjos, Juan Francisco De Paz Santana and Valderi Reis Quietinho Leithardt
Future Internet 2024, 16(3), 97; https://doi.org/10.3390/fi16030097 - 14 Mar 2024
Viewed by 835
Abstract
In the current era of social media, the proliferation of images sourced from unreliable origins underscores the pressing need for robust methods to detect forged content, particularly amidst the rapid evolution of image manipulation technologies. Existing literature delineates two primary approaches to image manipulation detection: active and passive. Active techniques intervene preemptively, embedding structures into images to facilitate subsequent authenticity verification, whereas passive methods analyze image content for traces of manipulation. This study presents a novel solution to image manipulation detection by leveraging a multi-stream neural network architecture. Our approach harnesses three convolutional neural networks (CNNs) operating on distinct data streams extracted from the original image. We have developed a solution based on two passive detection methodologies. The system utilizes two separate streams to extract specific data subsets, while a third stream processes the unaltered image. Each net independently processes its respective data stream, capturing diverse facets of the image. The outputs from these nets are then fused through concatenation to ascertain whether the image has undergone manipulation, yielding a comprehensive detection framework surpassing the efficacy of its constituent methods. Our work introduces a unique dataset derived from the fusion of four publicly available datasets, featuring organically manipulated images that closely resemble real-world scenarios. This dataset offers a more authentic representation than other state-of-the-art methods that use algorithmically generated datasets based on image patches. By encompassing genuine manipulation scenarios, our dataset enhances the model’s ability to generalize across varied manipulation techniques, thereby improving its performance in real-world settings. 
After training, the merged approach obtained an accuracy of 89.59% on the validation image set, significantly higher than the model trained with only unaltered images, which obtained 78.64%, and the two other models trained using images with a feature selection method applied to enhance inconsistencies, which obtained 68.02% for Error-Level Analysis images and 50.70% for the method using the Discrete Wavelet Transform. Moreover, our proposed approach exhibits reduced accuracy variance compared to alternative models, underscoring its stability and robustness across diverse datasets. The approach outlined in this work does not provide information about the specific location or type of tampering, which limits its practical applications. Full article
(This article belongs to the Special Issue Secure Communication Protocols for Future Computing)
19 pages, 2195 KiB  
Article
A Method for 5G–ICN Seamless Mobility Support Based on Router Buffered Data
by Mengchi Xing, Haojiang Deng and Rui Han
Future Internet 2024, 16(3), 96; https://doi.org/10.3390/fi16030096 - 13 Mar 2024
Viewed by 750
Abstract
The 5G core network adopts a Control and User Plane Separation (CUPS) architecture to meet the challenges of low-latency business requirements. In this architecture, a balance between management costs and user experience is achieved by moving the User Plane Function (UPF) to the edge of the network. However, cross-UPF handover during communication between the user equipment (UE) and the remote server will cause TCP/IP session interruption and affect the continuity of delay-sensitive real-time communication. Information-Centric Networks (ICNs) separate identity and location, and their ability to route based on identity can effectively handle mobility. Therefore, based on the 5G-ICN architecture, we propose a seamless mobility support method based on router buffered data (BDMM), making full use of the ICN’s identity-based routing capabilities to solve the problem of UE cross-UPF handover affecting business continuity. BDMM also uses the ICN router data buffering capabilities to reduce packet loss during handovers. We design a dynamic buffer resource allocation strategy (DBRAS) that can adjust buffer resource allocation in time according to network traffic changes and business types, solving the problem of unreasonable buffer resource allocation. Finally, experimental results show that our method outperforms other methods in terms of average packet delay, weighted average packet loss rate, and network overhead. In addition, our method also performs well in terms of average handover delay. Full article
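The core idea of router-buffered handover, retaining recent packets in the network so that packets sent during a cross-UPF handover gap can be replayed rather than lost, can be illustrated with a toy sequence-numbered buffer. This is a simplified sketch under invented names, not the paper's BDMM mechanism; the fixed deque capacity stands in for the dynamically allocated buffer of the DBRAS strategy.

```python
# Toy model of an ICN router buffer that replays packets a UE missed
# while it was detached during a cross-UPF handover.
from collections import deque

class HandoverBuffer:
    def __init__(self, capacity: int):
        self.buf = deque(maxlen=capacity)  # oldest packets dropped first
        self.seq = 0

    def forward(self, packet):
        # Forward a packet downstream and keep a buffered copy.
        self.seq += 1
        self.buf.append((self.seq, packet))
        return self.seq

    def replay_since(self, last_received_seq: int):
        # After handover completes, resend everything newer than the
        # last sequence number the UE acknowledged before detaching.
        return [pkt for seq, pkt in self.buf if seq > last_received_seq]

hb = HandoverBuffer(capacity=4)
for pkt in ["a", "b", "c", "d", "e"]:
    hb.forward(pkt)
print(hb.replay_since(3))  # → ['d', 'e']
```

Sizing the buffer (the paper's dynamic allocation problem) trades memory at the router against the longest handover gap that can be bridged without loss.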
13 pages, 2566 KiB  
Article
Personalized Federated Learning with Adaptive Feature Extraction and Category Prediction in Non-IID Datasets
by Ying-Hsun Lai, Shin-Yeh Chen, Wen-Chi Chou, Hua-Yang Hsu and Han-Chieh Chao
Future Internet 2024, 16(3), 95; https://doi.org/10.3390/fi16030095 - 11 Mar 2024
Viewed by 891
Abstract
Federated learning trains a neural network model using clients’ data, retaining the benefits of centralized model training while preserving their privacy. However, if the client data are not independently and identically distributed (non-IID) because of different environments, the accuracy of the model may suffer from client drift during training owing to discrepancies in each client’s data. This study proposes a personalized federated learning algorithm based on the concept of multitask learning that divides each client model into two layers: a feature extraction layer and a category prediction layer. The feature extraction layer maps the input data to a low-dimensional feature vector space, and its neural network parameters are aggregated with those of other clients using an adaptive method. The category prediction layer maps low-dimensional feature vectors to the label sample space, with its parameters remaining unaffected by other clients to maintain client uniqueness. The proposed personalized federated learning method produces faster learning model convergence rates and higher accuracy rates for the non-IID datasets in our experiments. Full article
(This article belongs to the Collection Machine Learning Approaches for User Identity)
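The layer-split aggregation described in this abstract, sharing feature-extraction parameters across clients while keeping each category-prediction head local, can be sketched as follows. Plain uniform averaging (FedAvg-style) stands in here for the paper's adaptive aggregation method, and the model structures are invented for illustration.

```python
# Sketch: aggregate only the feature-extraction parameters across clients;
# each client's prediction head stays personalized and untouched.

def aggregate(clients):
    """clients: list of {"feature": [floats], "head": [floats]} models."""
    n = len(clients)
    # Uniform average of the shared feature-extraction parameters.
    shared = [sum(c["feature"][i] for c in clients) / n
              for i in range(len(clients[0]["feature"]))]
    for c in clients:
        c["feature"] = list(shared)  # synchronized shared layer
        # c["head"] is deliberately left alone: it maps the shared
        # feature space to each client's own label distribution.
    return clients

models = aggregate([
    {"feature": [1.0, 2.0], "head": [0.1]},
    {"feature": [3.0, 4.0], "head": [0.9]},
])
print(models[0]["feature"])  # → [2.0, 3.0]
```

Keeping the heads local is what lets each client fit its own (non-IID) label distribution while still benefiting from a jointly learned representation.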
17 pages, 1160 KiB  
Article
Dynamic Industrial Optimization: A Framework Integrates Online Machine Learning for Processing Parameters Design
by Yu Yao and Quan Qian
Future Internet 2024, 16(3), 94; https://doi.org/10.3390/fi16030094 - 10 Mar 2024
Viewed by 963
Abstract
We develop the online process parameter design (OPPD) framework for efficiently handling streaming data collected from industrial automation equipment. This framework integrates online machine learning, concept drift detection and Bayesian optimization techniques. Initially, concept drift detection mitigates the impact of anomalous data on model updates. Data without concept drift are used for online model training and updating, enabling accurate predictions for the next processing cycle. Bayesian optimization is then employed for inverse optimization and process parameter design. Within OPPD, we introduce the online accelerated support vector regression (OASVR) algorithm for enhanced computational efficiency and model accuracy. OASVR simplifies support vector regression, boosting both speed and durability. Furthermore, we incorporate a dynamic window mechanism to regulate the training data volume for adapting to real-time demands posed by diverse online scenarios. Concept drift detection uses the EI-kMeans algorithm, and the Bayesian inverse design employs an upper confidence bound approach with an adaptive learning rate. Applied to single-crystal fabrication, the OPPD framework outperforms other models, with an RMSE of 0.12, meeting precision demands in production. Full article
(This article belongs to the Section Big Data and Augmented Intelligence)
13 pages, 2798 KiB  
Article
IMBA: IoT-Mist Bat-Inspired Algorithm for Optimising Resource Allocation in IoT Networks
by Ziyad Almudayni, Ben Soh and Alice Li
Future Internet 2024, 16(3), 93; https://doi.org/10.3390/fi16030093 - 08 Mar 2024
Viewed by 740
Abstract
The advent of the Internet of Things (IoT) has revolutionised our interaction with the environment, facilitating seamless connections among sensors, actuators, and humans. Efficient task scheduling stands as a cornerstone in maximising resource utilisation and ensuring timely task execution in IoT systems. The implementation of efficient task scheduling methodologies can yield substantial enhancements in productivity and cost-effectiveness for IoT infrastructures. To that end, this paper presents the IoT-mist bat-inspired algorithm (IMBA), designed specifically to optimise resource allocation in IoT environments. IMBA’s efficacy lies in its ability to elevate user service quality through enhancements in task completion rates, load distribution, network utilisation, processing time, and power efficiency. Through comparative analysis, IMBA demonstrates superiority over traditional methods, such as fuzzy logic and round-robin algorithms, across all performance metrics. Full article
21 pages, 2970 KiB  
Article
User Experience, Functionality and Aesthetics Evaluation in an Academic Multi-Site Web Ecosystem
by Andreas Giannakoulopoulos, Minas Pergantis and Aristeidis Lamprogeorgos
Future Internet 2024, 16(3), 92; https://doi.org/10.3390/fi16030092 - 08 Mar 2024
Viewed by 828
Abstract
The present study focuses on using qualitative and quantitative data to evaluate the functionality, user experience (UX), and aesthetic approach offered by an academic multi-site Web ecosystem consisting of multiple interconnected websites. Large entities in various industry fields often have the need for an elaborate Web presence. In an effort to address the challenges posed by this need specifically in the field of academia, the authors developed, over a period of many years, a multi-site ecosystem within the Ionian University, which focuses on interconnectivity and a collaborative approach to academic content management. This system, known as “Publish@Ionio”, uses a singular content management infrastructure to allow for the creation of content for different websites that share both information and resources while at the same time allowing for individual variations in both functionality and aesthetics. The ecosystem was evaluated through quantitative data from its operation and qualitative feedback from a focus-group interview with experts, including website editors and administrative staff. The collected data were used to assess the strengths and weaknesses of the multi-site approach based on the actions and needs of the individuals in charge of generating content. The study led to conclusions on the advantages that interoperability offers in terms of digital and human resource management, the benefits of a unified aesthetic approach that allows for variability, and the necessity of collaborative content management tools that are tailored to the content’s nature. Full article

27 pages, 1086 KiB  
Article
Implementing Internet of Things Service Platforms with Network Function Virtualization Serverless Technologies
by Mauro Femminella and Gianluca Reali
Future Internet 2024, 16(3), 91; https://doi.org/10.3390/fi16030091 - 08 Mar 2024
Viewed by 1144
Abstract
The need for adaptivity and scalability in telecommunication systems has led to the introduction of a software-based approach to networking, in which network functions are virtualized and implemented in software modules, based on network function virtualization (NFV) technologies. The growing demand for low latency, efficiency, flexibility, and security has placed some limitations on the adoption of these technologies, due to problems with traditional virtualization solutions. However, the introduction of lightweight virtualization approaches is paving the way for new and better infrastructures for implementing network functions. This article discusses these new virtualization solutions and presents a proposal, based on serverless computing, that uses them to implement container-based virtualized network functions for the delivery of advanced Internet of Things (IoT) services. It uses open-source software components to implement both the virtualization layer, through Firecracker, and the runtime environment, based on Kata containers. A set of experiments shows that the proposed approach is fast at booting new network functions and more efficient than some baseline solutions, with a minimal resource footprint. Therefore, it is an excellent candidate to implement NFV functions in the edge deployment of serverless services for the IoT. Full article
(This article belongs to the Special Issue Applications of Wireless Sensor Networks and Internet of Things)

23 pages, 1867 KiB  
Article
The Varieties of Agency in Human–Smart Device Relationships: The Four Agency Profiles
by Heidi Toivonen and Francesco Lelli
Future Internet 2024, 16(3), 90; https://doi.org/10.3390/fi16030090 - 07 Mar 2024
Cited by 1 | Viewed by 1145
Abstract
This paper investigates how users of smart devices attribute agency both to themselves and to their devices. Statistical analyses, tag cloud analysis, and sentiment analysis were applied to survey data collected from 587 participants. As a result of a preliminary factorial analysis, two independent constructs of agency emerged: (i) user agency and (ii) device agency. These two constructs received further support from a sentiment analysis and a tag cloud analysis conducted on the written responses provided in a survey. We also studied how user agency and device agency relate to various background variables, such as the user’s professional knowledge of smart devices. We present a new preliminary model in which the two agency constructs conceptualize agency in human–smart device relationships as a matrix of four profiles: controller, collaborator, detached, and victim. Our model with the constructs of user agency and device agency fosters a richer understanding of the users’ experiences in their interactions with devices. The results could facilitate designing interfaces that better take into account the users’ views of their own capabilities as well as the capacities of their devices; the findings can assist in tackling challenges such as the feeling of lacking agency experienced by technologically savvy users. Full article
(This article belongs to the Section Techno-Social Smart Systems)
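The four-profile matrix can be read as a simple quadrant mapping over the two agency constructs. A minimal sketch in Python (the score scale, the 0.5 midpoint, and the exact quadrant assignment are illustrative assumptions, not the authors' operationalization):

```python
def agency_profile(user_agency, device_agency, midpoint=0.5):
    """Map a (user agency, device agency) score pair onto one of the
    four profiles; scores are assumed normalized to [0, 1]."""
    if user_agency >= midpoint and device_agency >= midpoint:
        return "collaborator"   # high sense of own and device agency
    if user_agency >= midpoint:
        return "controller"     # high user agency, low device agency
    if device_agency >= midpoint:
        return "victim"         # low user agency, high device agency
    return "detached"           # low on both constructs

# Toy score pairs, one per quadrant.
profiles = {scores: agency_profile(*scores)
            for scores in [(0.9, 0.2), (0.8, 0.9), (0.1, 0.8), (0.2, 0.3)]}
```

This kind of mapping is only the skeleton of the model; the paper derives the two constructs from factorial analysis rather than fixed thresholds.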

17 pages, 2344 KiB  
Article
An Advanced Path Planning and UAV Relay System: Enhancing Connectivity in Rural Environments
by Mostafa El Debeiki, Saba Al-Rubaye, Adolfo Perrusquía, Christopher Conrad and Juan Alejandro Flores-Campos
Future Internet 2024, 16(3), 89; https://doi.org/10.3390/fi16030089 - 06 Mar 2024
Viewed by 954
Abstract
The use of unmanned aerial vehicles (UAVs) is increasing in transportation applications due to their high versatility and maneuverability in complex environments. Search and rescue is one of the most challenging applications of UAVs due to the non-homogeneous nature of the environmental and communication landscapes. In particular, mountainous areas pose difficulties due to the loss of connectivity caused by large valleys and the volumes of hazardous weather. In this paper, the connectivity issue in mountainous areas is addressed using a path planning algorithm for UAV relay. The approach is based on two main phases: (1) the detection of areas of interest where the connectivity signal is poor, and (2) an energy-aware and resilient path planning algorithm that maximizes the coverage links. The approach uses a viewshed analysis to identify areas of visibility between the areas of interest and the cell-towers. This allows the construction of a blockage map that prevents the UAV from passing through areas with no coverage, whilst maximizing the coverage area under energy constraints and hazardous weather. The proposed approach is validated under open-access datasets of mountainous zones, and the obtained results confirm the benefits of the proposed approach for communication networks in remote and challenging environments. Full article
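The blockage-map idea, keeping the relay UAV out of cells with no cell-tower visibility, can be illustrated with a toy grid search (a sketch under stated assumptions, not the paper's planner: the grid, the `plan_path` helper, and the 4-neighbour moves are all illustrative):

```python
from collections import deque

def plan_path(blockage, start, goal):
    """Breadth-first search over a grid, treating cells marked 1
    (no cell-tower visibility in the viewshed) as impassable."""
    rows, cols = len(blockage), len(blockage[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and blockage[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # goal unreachable without losing coverage

# Toy blockage map derived from a viewshed analysis: 1 = no coverage.
blockage_map = [
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]
route = plan_path(blockage_map, (0, 0), (3, 3))
```

The paper's planner additionally weighs energy constraints and hazardous weather; this sketch shows only the coverage-avoidance core.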

38 pages, 10789 KiB  
Article
Dragon_Pi: IoT Side-Channel Power Data Intrusion Detection Dataset and Unsupervised Convolutional Autoencoder for Intrusion Detection
by Dominic Lightbody, Duc-Minh Ngo, Andriy Temko, Colin C. Murphy and Emanuel Popovici
Future Internet 2024, 16(3), 88; https://doi.org/10.3390/fi16030088 - 05 Mar 2024
Viewed by 1176
Abstract
The growth of the Internet of Things (IoT) has led to a significant rise in cyber attacks and an expanded attack surface for the average consumer. In order to protect consumers and infrastructure, research into detecting malicious IoT activity must be of the highest priority. Security research in this area has two key issues: the lack of datasets for training artificial intelligence (AI)-based intrusion detection models and the fact that most existing datasets concentrate only on one type of network traffic. Thus, this study introduces Dragon_Pi, an intrusion detection dataset designed for IoT devices based on side-channel power consumption data. Dragon_Pi comprises a collection of normal and under-attack power consumption traces from separate testbeds featuring a DragonBoard 410c and a Raspberry Pi. Dragon_Slice, an unsupervised convolutional autoencoder (CAE), is trained exclusively on held-out normal slices from Dragon_Pi for anomaly detection. The Dragon_Slice network has two iterations in this study. The original achieves 0.78 AUC without post-processing and 0.876 AUC with post-processing. A second iteration of Dragon_Slice, utilising dropout to further impede the CAE’s ability to reconstruct anomalies, outperforms the original network after post-processing, achieving an AUC of 0.89 despite a lower raw AUC of 0.764. Full article
(This article belongs to the Special Issue Security and Privacy in Blockchains and the IoT III)
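The detection principle, that an autoencoder trained only on normal traces reconstructs attacks poorly, so a high reconstruction error flags an anomaly, can be sketched as follows (the moving-average `reconstruct` stands in for the trained CAE, and the variable names, toy traces, and threshold rule are all assumptions):

```python
def reconstruct(trace):
    """Stand-in for the trained autoencoder: a moving-average smoother
    that approximates slowly varying normal traces well, but not
    abrupt attack spikes."""
    out = []
    for i in range(len(trace)):
        window = trace[max(0, i - 2):i + 3]
        out.append(sum(window) / len(window))
    return out

def anomaly_score(trace):
    """Mean squared reconstruction error over a power trace."""
    recon = reconstruct(trace)
    return sum((a - b) ** 2 for a, b in zip(trace, recon)) / len(trace)

# Toy side-channel power traces (arbitrary units).
normal = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1]
attack = [1.0, 1.1, 5.0, 4.8, 1.0, 5.2, 0.9, 1.1]  # bursty power draw

# Threshold set with a margin over the error observed on normal data.
threshold = anomaly_score(normal) * 3
is_attack = anomaly_score(attack) > threshold
```

In the paper the scoring network is a learned CAE and the post-processing step further separates the two score distributions; the thresholding logic above is the common pattern.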

15 pages, 3855 KiB  
Article
Advanced Techniques for Geospatial Referencing in Online Media Repositories
by Dominik Warch, Patrick Stellbauer and Pascal Neis
Future Internet 2024, 16(3), 87; https://doi.org/10.3390/fi16030087 - 01 Mar 2024
Viewed by 1009
Abstract
In the digital transformation era, video media libraries’ untapped potential is immense, restricted primarily by their non-machine-readable nature and basic search functionalities limited to standard metadata. This study presents a novel multimodal methodology that utilizes advances in artificial intelligence, including neural networks, computer vision, and natural language processing, to extract and geocode geospatial references from videos. Leveraging the geospatial information from videos enables semantic searches, enhances search relevance, and allows for targeted advertising, particularly on mobile platforms. The methodology involves a comprehensive process, including data acquisition from ARD Mediathek, image and text analysis using advanced machine learning models, and audio and subtitle processing with state-of-the-art linguistic models. Despite challenges like model interpretability and the complexity of geospatial data extraction, this study’s findings indicate significant potential for advancing the precision of spatial data analysis within video content, promising to enrich media libraries with more navigable, contextually rich content. This advancement has implications for user engagement, targeted services, and broader urban planning and cultural heritage applications. Full article

25 pages, 10389 KiB  
Article
Towards a Hybrid Security Framework for Phishing Awareness Education and Defense
by Peter K. K. Loh, Aloysius Z. Y. Lee and Vivek Balachandran
Future Internet 2024, 16(3), 86; https://doi.org/10.3390/fi16030086 - 01 Mar 2024
Viewed by 1157
Abstract
The rise in generative Artificial Intelligence (AI) has led to the development of more sophisticated phishing email attacks, as well as an increase in research on using AI to aid the detection of these advanced attacks. Successful phishing email attacks severely impact businesses, as employees are usually the vulnerable targets. Defending against such attacks therefore requires action along both technological and human vectors. Security hardening research along the technological vector is sparse and focuses mainly on the use of machine learning and natural language processing to distinguish between machine- and human-generated text. Common existing approaches to harden security along the human vector consist of third-party organized training programmes, the content of which needs to be updated over time. There is, to date, no reported approach that provides both phishing attack detection and progressive end-user training. In this paper, we present our contribution, which includes the design and development of an integrated approach that employs AI-assisted and generative AI platforms for phishing attack detection and continuous end-user education in a hybrid security framework. This framework supports scenario-customizable and evolving user education in dealing with increasingly advanced phishing email attacks. The technological design and functional details for both platforms are presented and discussed. Performance tests showed that the phishing attack detection sub-system using the Convolutional Neural Network (CNN) deep learning model architecture achieved the best overall results: above 94% accuracy, above 95% precision, and above 94% recall. Full article
(This article belongs to the Special Issue Information and Future Internet Security, Trust and Privacy II)

19 pages, 1639 KiB  
Article
Security Threats and Promising Solutions Arising from the Intersection of AI and IoT: A Study of IoMT and IoET Applications
by Hadeel Alrubayyi, Moudy Sharaf Alshareef, Zunaira Nadeem, Ahmed M. Abdelmoniem and Mona Jaber
Future Internet 2024, 16(3), 85; https://doi.org/10.3390/fi16030085 - 29 Feb 2024
Viewed by 1005
Abstract
The hype around the Internet of Things as an enabler of intelligent applications, and its related promise of ushering in accessibility, efficiency, and quality of service, is met with hindering security and data privacy concerns. It follows that such IoT systems, which are empowered by artificial intelligence, need to be investigated with cognisance of security threats and mitigation schemes that are tailored to their specific constraints and requirements. In this work, we present a comprehensive review of security threats in IoT and emerging countermeasures, with a particular focus on malware and man-in-the-middle attacks. Next, we elaborate on two use cases: the Internet of Energy Things and the Internet of Medical Things. Innovative artificial intelligence methods for automating the detection of energy theft and of stress levels are first detailed, followed by an examination of contextual security threats and privacy breach concerns. An artificial immune system is employed to mitigate the risk of malware attacks, differential privacy is proposed for data protection, and federated learning is harnessed to reduce data exposure. Full article
(This article belongs to the Special Issue Cyber Security in the New "Edge Computing + IoT" World)

22 pages, 2142 KiB  
Review
SoK: Analysis Techniques for WebAssembly
by Håkon Harnes and Donn Morrison
Future Internet 2024, 16(3), 84; https://doi.org/10.3390/fi16030084 - 29 Feb 2024
Viewed by 1052
Abstract
WebAssembly is a low-level bytecode language that enables high-level languages like C, C++, and Rust to be executed in the browser at near-native performance. In recent years, WebAssembly has gained widespread adoption and is now natively supported by all modern browsers. Despite its benefits, WebAssembly has introduced significant security challenges, primarily due to vulnerabilities inherited from memory-unsafe source languages. Moreover, the use of WebAssembly extends beyond traditional web applications to smart contracts on blockchain platforms, where vulnerabilities have led to significant financial losses. WebAssembly has also been used for malicious purposes, like cryptojacking, where website visitors’ hardware resources are used for crypto mining without their consent. To address these issues, several analysis techniques for WebAssembly binaries have been proposed. This paper presents a systematic review of these analysis techniques, focusing on vulnerability analysis, cryptojacking detection, and smart contract security. The analysis techniques are categorized into static, dynamic, and hybrid methods, evaluating their strengths and weaknesses based on quantitative data. Our findings reveal that static techniques are efficient but may struggle with complex binaries, while dynamic techniques offer better detection at the cost of increased overhead. Hybrid approaches, which merge the strengths of static and dynamic methods, are not extensively used in the literature and emerge as a promising direction for future research. Lastly, this paper identifies potential future research directions based on the state of the current literature. Full article
(This article belongs to the Section Cybersecurity)

17 pages, 5387 KiB  
Article
Edge-Enhanced TempoFuseNet: A Two-Stream Framework for Intelligent Multiclass Video Anomaly Recognition in 5G and IoT Environments
by Gulshan Saleem, Usama Ijaz Bajwa, Rana Hammad Raza and Fan Zhang
Future Internet 2024, 16(3), 83; https://doi.org/10.3390/fi16030083 - 29 Feb 2024
Viewed by 909
Abstract
Surveillance video analytics encounters unprecedented challenges in 5G and IoT environments, including complex intra-class variations, short-term and long-term temporal dynamics, and variable video quality. This study introduces Edge-Enhanced TempoFuseNet, a cutting-edge framework that strategically reduces spatial resolution to allow the processing of low-resolution images. A dual upscaling methodology based on bicubic interpolation and an encoder–bank–decoder configuration is used for anomaly classification. The two-stream architecture combines the power of a pre-trained Convolutional Neural Network (CNN) for spatial feature extraction from RGB imagery in the spatial stream, while the temporal stream focuses on learning short-term temporal characteristics, reducing the computational burden of optical flow. To analyze long-term temporal patterns, the extracted features from both streams are combined and routed through a Gated Recurrent Unit (GRU) layer. The proposed framework (TempoFuseNet) outperforms the encoder–bank–decoder model in terms of performance metrics, achieving a multiclass macro average accuracy of 92.28%, an F1-score of 69.29%, and a false positive rate of 4.41%. This study presents a significant advancement in the field of video anomaly recognition and provides a comprehensive solution to the complex challenges posed by real-world surveillance scenarios in the context of 5G and IoT. Full article
(This article belongs to the Special Issue Edge Intelligence: Edge Computing for 5G and the Internet of Things)

19 pages, 3172 KiB  
Article
Multi-Level Split Federated Learning for Large-Scale AIoT System Based on Smart Cities
by Hanyue Xu, Kah Phooi Seng, Jeremy Smith and Li Minn Ang
Future Internet 2024, 16(3), 82; https://doi.org/10.3390/fi16030082 - 28 Feb 2024
Viewed by 1227
Abstract
In the context of smart cities, the integration of artificial intelligence (AI) and the Internet of Things (IoT) has led to the proliferation of AIoT systems, which handle vast amounts of data to enhance urban infrastructure and services. However, the collaborative training of deep learning models within these systems encounters significant challenges, chiefly due to data privacy concerns and dealing with communication latency from large-scale IoT devices. To address these issues, multi-level split federated learning (multi-level SFL) has been proposed, merging the benefits of split learning (SL) and federated learning (FL). This framework introduces a novel multi-level aggregation architecture that reduces communication delays, enhances scalability, and addresses system and statistical heterogeneity inherent in large AIoT systems with non-IID data distributions. The architecture leverages the Message Queuing Telemetry Transport (MQTT) protocol to cluster IoT devices geographically and employs edge and fog computing layers for initial model parameter aggregation. Simulation experiments validate that the multi-level SFL outperforms traditional SFL by improving model accuracy and convergence speed in large-scale, non-IID environments. This paper delineates the proposed architecture, its workflow, and its advantages in enhancing the robustness and scalability of AIoT systems in smart cities while preserving data privacy. Full article
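The multi-level aggregation step can be illustrated with sample-weighted averaging performed first per edge cluster and then across clusters (a minimal sketch; the cluster names, toy parameter vectors, and sample counts are assumptions). With sample-count weights, the two-level result matches a flat FedAvg over all clients:

```python
def weighted_average(models, sizes):
    """FedAvg-style aggregation: average parameter vectors weighted
    by the number of local samples each client holds."""
    total = sum(sizes)
    dim = len(models[0])
    return [sum(m[i] * s for m, s in zip(models, sizes)) / total
            for i in range(dim)]

# Client model parameters (toy 2-dim vectors), clustered geographically.
edge_clusters = {
    "edge_A": ([[1.0, 2.0], [3.0, 4.0]], [10, 30]),
    "edge_B": ([[5.0, 6.0]], [20]),
}

# Level 1: each edge node aggregates the clients in its own cluster.
edge_models, edge_sizes = [], []
for models, sizes in edge_clusters.values():
    edge_models.append(weighted_average(models, sizes))
    edge_sizes.append(sum(sizes))

# Level 2: the fog/cloud layer aggregates the edge-level results.
global_model = weighted_average(edge_models, edge_sizes)
```

The equivalence to flat aggregation is what lets the hierarchy cut communication delay without changing the aggregated model; the split-learning side of the framework (partitioning the network itself between device and server) is not shown here.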

23 pages, 737 KiB  
Article
A Synergistic Elixir-EDA-MQTT Framework for Advanced Smart Transportation Systems
by Yushan Li and Satoshi Fujita
Future Internet 2024, 16(3), 81; https://doi.org/10.3390/fi16030081 - 28 Feb 2024
Viewed by 893
Abstract
This paper proposes a novel event-driven architecture for enhancing edge-based vehicular systems within smart transportation. Leveraging the inherent real-time, scalable, and fault-tolerant nature of the Elixir language, we present an innovative architecture tailored for edge computing. This architecture employs MQTT for efficient event transport and utilizes Elixir’s lightweight concurrency model for distributed processing. Robustness and scalability are further ensured through the EMQX broker. We demonstrate the effectiveness of our approach through two smart transportation case studies: a traffic light system for dynamically adjusting signal timing, and a cab dispatch prototype designed for high concurrency and real-time data processing. Evaluations on an Apple M1 chip reveal consistently low latency responses below 5 ms and efficient multicore utilization under load. These findings showcase the system’s robust throughput and multicore programming capabilities, confirming its suitability for real-time, distributed edge computing applications in smart transportation. Therefore, our work suggests that integrating Elixir with an event-driven model represents a promising approach for developing scalable, responsive applications in edge computing. This opens avenues for further exploration and adoption of Elixir in addressing the evolving demands of edge-based smart transportation systems. Full article
(This article belongs to the Special Issue Edge Intelligence: Edge Computing for 5G and the Internet of Things)

17 pages, 1738 KiB  
Article
A Transferable Deep Learning Framework for Improving the Accuracy of Internet of Things Intrusion Detection
by Haedam Kim, Suhyun Park, Hyemin Hong, Jieun Park and Seongmin Kim
Future Internet 2024, 16(3), 80; https://doi.org/10.3390/fi16030080 - 28 Feb 2024
Cited by 1 | Viewed by 1277
Abstract
As the IoT solutions and services market proliferates, industrial fields utilizing IoT devices are also diversifying. The proliferation of IoT devices, often intertwined with users’ personal information and privacy, has led to a continuous surge in attacks targeting these devices. However, conventional network-level intrusion detection systems with pre-defined rulesets are gradually losing their efficacy due to the heterogeneous environments of IoT ecosystems. To address such security concerns, researchers have utilized ML-based network-level intrusion detection techniques. Specifically, transfer learning has been applied to identify unforeseen malicious traffic in IoT environments based on knowledge distilled from rich source domain data sets. Nevertheless, since most IoT devices operate in heterogeneous but small-scale environments, such as home networks, selecting adequate source domains for learning proves challenging. This paper introduces a framework designed to tackle this issue. In instances where assessing an adequate data set through pre-learning using transfer learning is non-trivial, our proposed framework advocates the selection of a data set as the source domain for transfer learning. This selection process aims to determine the appropriateness of implementing transfer learning, offering the best practice in such scenarios. Our evaluation demonstrates that the proposed framework successfully chooses a fitting source domain data set, delivering the highest accuracy. Full article
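The selection idea, scoring each candidate source domain by how well a model pre-trained on it performs on a small target validation set, can be sketched with a toy nearest-centroid classifier (the classifier, the data set names, and the toy traffic features are illustrative assumptions, not the paper's method):

```python
def centroid(points):
    dim = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(dim)]

def train(dataset):
    """Toy 'model': one centroid per class (stand-in for pre-training
    on a candidate source domain)."""
    classes = {}
    for features, label in dataset:
        classes.setdefault(label, []).append(features)
    return {label: centroid(pts) for label, pts in classes.items()}

def accuracy(model, dataset):
    def predict(x):
        return min(model, key=lambda lbl: sum(
            (a - b) ** 2 for a, b in zip(x, model[lbl])))
    return sum(predict(f) == lbl for f, lbl in dataset) / len(dataset)

def select_source(candidates, target_validation):
    """Pick the source domain whose pre-trained model scores best
    on the small target validation set."""
    return max(candidates,
               key=lambda name: accuracy(train(candidates[name]),
                                         target_validation))

# Two candidate source data sets: (features, label); 1 = malicious.
candidates = {
    "office_traffic": [([0.1, 0.2], 0), ([0.9, 0.8], 1)],
    "industrial_traffic": [([0.8, 0.1], 0), ([0.2, 0.9], 1)],
}
target_validation = [([0.15, 0.25], 0), ([0.85, 0.75], 1)]
best = select_source(candidates, target_validation)
```

In practice the "model" would be a deep network fine-tuned per candidate; the point of the sketch is only the selection loop over source domains.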

20 pages, 3661 KiB  
Article
A Multi-Head LSTM Architecture for Bankruptcy Prediction with Time Series Accounting Data
by Mattia Pellegrino, Gianfranco Lombardo, George Adosoglou, Stefano Cagnoni, Panos M. Pardalos and Agostino Poggi
Future Internet 2024, 16(3), 79; https://doi.org/10.3390/fi16030079 - 27 Feb 2024
Viewed by 1040
Abstract
With the recent advances in machine learning (ML), several models have been successfully applied to financial and accounting data to predict the likelihood of companies’ bankruptcy. However, time series have received little attention in the literature, with a lack of studies on the application of deep learning sequence models such as Recurrent Neural Networks (RNNs) and the recent Attention-based models in general. In this research work, we investigated the application of Long Short-Term Memory (LSTM) networks to exploit time series of accounting data for bankruptcy prediction. The main contributions of our work are the following: (a) We proposed a multi-head LSTM that models each financial variable in a time window independently and compared it with a single-input LSTM and other traditional ML models. The multi-head LSTM outperformed all the other models. (b) We identified the optimal time series length for bankruptcy prediction to be equal to 4 years of accounting data. (c) We made public the dataset we used for the experiments, which includes data from 8262 different public companies in the American stock market, generated in the period between 1999 and 2018. Furthermore, we proved the efficacy of the multi-head LSTM model in terms of fewer false positives and a better division of the two classes. Full article
(This article belongs to the Special Issue Machine Learning Perspective in the Convolutional Neural Network Era)
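The multi-head input arrangement, one sequence per financial variable over a fixed window, can be sketched as a data-shaping step (a sketch of the input layout only, not the LSTM itself; the field names and toy figures are assumptions, while the 4-year window follows the paper's finding):

```python
def build_heads(records, variables, window=4):
    """Arrange accounting time series for a multi-head model: one
    input sequence per financial variable over the last `window`
    years, each destined for its own LSTM head."""
    years = sorted(records)[-window:]
    return {var: [records[y][var] for y in years] for var in variables}

# Toy accounting history for one company (arbitrary units).
company = {
    2015: {"revenue": 120.0, "debt": 80.0},
    2016: {"revenue": 110.0, "debt": 95.0},
    2017: {"revenue": 90.0, "debt": 120.0},
    2018: {"revenue": 70.0, "debt": 150.0},
}
heads = build_heads(company, ["revenue", "debt"])
```

Each resulting sequence would feed an independent LSTM head whose outputs are then merged for the final bankruptcy classification, in contrast to a single-input LSTM that consumes all variables jointly at each time step.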

46 pages, 691 KiB  
Article
Deterministic K-Identification for Future Communication Networks: The Binary Symmetric Channel Results
by Mohammad Javad Salariseddigh, Ons Dabbabi, Christian Deppe and Holger Boche
Future Internet 2024, 16(3), 78; https://doi.org/10.3390/fi16030078 - 26 Feb 2024
Viewed by 832
Abstract
Numerous applications of the Internet of Things (IoT) feature an event recognition behavior where the established Shannon capacity is not the appropriate central performance measure. Instead, the identification capacity for such systems is considered to be an alternative metric, and has been developed in the literature. In this paper, we develop deterministic K-identification (DKI) for the binary symmetric channel (BSC), with and without a Hamming weight constraint imposed on the codewords. This channel may be of use for IoT in the context of smart system technologies, where sophisticated communication models can be reduced to a BSC for the aim of studying basic information-theoretic properties. We derive inner and outer bounds on the DKI capacity of the BSC when the size of the goal message set K may grow in the codeword length n. As a major observation, we find that, for deterministic encoding, if K grows exponentially in n, i.e., K = 2^(nκ), where κ is the identification goal rate, then the number of messages that can be accurately identified also grows exponentially in n, i.e., as 2^(nR), where R is the DKI coding rate. Furthermore, the established inner and outer bound regions reflect the impact of the input constraint (Hamming weight) and the channel statistics, i.e., the cross-over probability. Full article
(This article belongs to the Special Issue Featured Papers in the Section Internet of Things)

25 pages, 3727 KiB  
Article
Enabling Vehicle-to-Vehicle Trust in Rural Areas: An Evaluation of a Pre-Signature Scheme for Infrastructure-Limited Environments
by Dimah Almani, Tim Muller, Xavier Carpent, Takahito Yoshizawa and Steven Furnell
Future Internet 2024, 16(3), 77; https://doi.org/10.3390/fi16030077 - 26 Feb 2024
Viewed by 1030
Abstract
This research investigates the deployment and effectiveness of the novel Pre-Signature scheme, developed to make up-to-date reputation available in Vehicle-to-Vehicle (V2V) communications in rural landscapes, where the communications infrastructure is limited. We discuss how existing standards and specifications can be adjusted to incorporate the Pre-Signature scheme to disseminate reputation. Addressing the unique challenges posed by sparse or irregular Roadside Unit (RSU) coverage in these areas, the study investigates the implications of such environmental factors on the integrity and reliability of V2V communication networks. Using the widely used SUMO traffic simulation tool, we create and simulate real-world rural scenarios. We have conducted an in-depth performance evaluation of the Pre-Signature scheme under the typical infrastructural limitations encountered in rural scenarios. Our findings demonstrate the scheme’s usefulness in scenarios with variable or constrained RSU access. Furthermore, the relationships between three variables, namely communication range, number of RSUs, and degree of home-to-vehicle connectivity overnight, are studied, offering an exhaustive analysis of the determinants influencing V2V communication efficiency in rural contexts. The important findings are (1) that access to accurate Reputation Values increases with all three variables and (2) that the necessity of Pre-Signatures decreases as the number and range of RSUs increase. Together, these findings imply that areas with low RSU adoption (typically rural areas) benefit the most from our approach. Full article
(This article belongs to the Section Cybersecurity)
