Future Internet, Volume 15, Issue 9 (September 2023) – 36 articles

Cover Story (view full-size image): Digital twins enhance the intelligent manufacturing process by replicating system behavior, offering real-time analysis and predictive maintenance during operations. Due to modeling complexity, most digital twins use purely data-driven approaches that do not consider physical phenomena. To attenuate modeling error and increase interpretability, this paper proposes a novel approach that combines neural ODEs with physical dynamical equations to build digital twins of cooling fan systems. This hybrid modeling approach achieves accurate prediction with fewer parameters and is robust against unexpected input patterns. The work demonstrates great potential for adapting neural networks into a physical or mathematical framework, enabling more intelligent and robust manufacturing processes in the future. View this paper
23 pages, 5452 KiB  
Article
Evaluation of Blockchain Networks’ Scalability Limitations in Low-Powered Internet of Things (IoT) Sensor Networks
by Kithmini Godewatte Arachchige, Philip Branch and Jason But
Future Internet 2023, 15(9), 317; https://doi.org/10.3390/fi15090317 - 21 Sep 2023
Viewed by 1809
Abstract
With the development of Internet of Things (IoT) technologies, industries such as healthcare have started using low-powered sensor-based devices. Because IoT devices are typically low-powered, they are susceptible to cyber intrusions. As an emerging information security solution, blockchain technology has considerable potential for protecting low-powered IoT end devices. Blockchain technology provides promising security features such as cryptography, hash functions, time stamps, and a distributed ledger function. Therefore, blockchain technology can be a robust security technology for securing low-powered IoT devices. However, the integration of blockchain and IoT technologies raises a number of research questions, of which scalability is one of the most significant. Blockchain's scalability in low-powered sensor networks needs to be evaluated to identify practical applications of both technologies. In this paper, we analyse the scalability limitations of three commonly used blockchain algorithms running on low-powered single-board computers communicating in a wireless sensor network, assessing each blockchain network as the number of nodes increases. Our analysis shows considerable scalability variations between the three blockchain networks: some exhibit network latency exceeding 800 ms, and some use bandwidth exceeding 1600 Kbps. This work will contribute to developing efficient blockchain-based IoT sensor networks. Full article
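As a rough illustration of the kind of scalability experiment described above, the sketch below times transaction round trips as the node count grows; `submit_transaction` is a hypothetical stand-in for a real blockchain client call, not anything from the paper.

```python
import time
import statistics

def submit_transaction(node_count: int) -> None:
    """Hypothetical stand-in for submitting a transaction to a blockchain
    network of `node_count` nodes; replace with a real client call
    (e.g., an RPC to a single-board node)."""
    time.sleep(0.05 * node_count)  # placeholder network delay

def measure_latency(node_counts, trials=10):
    results = {}
    for n in node_counts:
        samples = []
        for _ in range(trials):
            start = time.perf_counter()
            submit_transaction(n)
            samples.append((time.perf_counter() - start) * 1000)  # ms
        results[n] = (statistics.mean(samples), statistics.stdev(samples))
    return results

if __name__ == "__main__":
    for n, (mean_ms, sd_ms) in measure_latency([2, 4, 8]).items():
        print(f"{n} nodes: {mean_ms:.1f} ms +/- {sd_ms:.1f} ms")
```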
21 pages, 11273 KiB  
Article
Technical, Qualitative and Energy Analysis of Wireless Control Modules for Distributed Smart Home Systems
by Andrzej Ożadowicz
Future Internet 2023, 15(9), 316; https://doi.org/10.3390/fi15090316 - 20 Sep 2023
Cited by 1 | Viewed by 1307
Abstract
Distributed smart home systems using wireless communication are increasingly installed and operated in households, their popularity owing to ease of installation and configuration. This paper presents a comprehensive technical, qualitative, and energy analysis of several popular smart home modules. Specifically, it verifies their power consumption levels in both standby and active modes to assess their impact on the energy efficiency of building installations. This matters given the modules' continuous operation, and in relation to the relatively low power of loads common in buildings, such as LED lighting. The author presents the results of measurements carried out for seven different smart home modules controlling seven different types of loads. The analysis shows that home automation modules account for a significant share of the energy balance; in particular, the reactive power consumption introduced by installing smart home modules is noteworthy. Drawing on all threads of the analysis and the discussion of the measurement results, a short SWOT analysis is presented, indicating important issues for the further development of smart systems and the Internet of Things with wireless communication interfaces, dedicated to home and building applications. Full article
(This article belongs to the Special Issue Artificial Intelligence and Blockchain Technology for Smart Cities)
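To see why standby draw matters against low-power loads such as LED lighting, here is a back-of-the-envelope sketch; the wattages and module count are assumed for illustration, not the paper's measured values.

```python
# Assumed figures for illustration only; the paper reports measured values.
standby_power_w = 0.5        # standby draw of one smart home module (assumed)
modules = 20                 # number of modules in a household (assumed)
hours_per_year = 24 * 365

standby_kwh = standby_power_w * modules * hours_per_year / 1000
print(f"Annual standby energy: {standby_kwh:.0f} kWh")   # ~88 kWh

# Compare against a 9 W LED lamp used 4 h/day:
led_kwh = 9 * 4 * 365 / 1000
print(f"One LED lamp, 4 h/day: {led_kwh:.1f} kWh/year")  # ~13 kWh
# Under these assumptions the modules' combined standby draw dwarfs a
# single LED lamp's consumption, which is why module overhead matters.
```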
19 pages, 1182 KiB  
Article
Force-Based Self-Organizing MANET/FANET with a UAV Swarm
by Fabrice Saffre, Hanno Hildmann and Antti Anttonen
Future Internet 2023, 15(9), 315; https://doi.org/10.3390/fi15090315 - 19 Sep 2023
Cited by 1 | Viewed by 1327
Abstract
This paper introduces a novel distributed algorithm designed to optimize the deployment of access points within Mobile Ad Hoc Networks (MANETs) for better service quality in infrastructure-less environments. The algorithm operates based on local, independent execution by each network node, thus ensuring a high degree of scalability and adaptability to changing network conditions. The primary focus is to match the spatial distribution of access points with the distribution of client devices while maintaining strong connectivity to the network root. Using autonomous decision-making and choreographed path-planning, this algorithm bridges the gap between demand-responsive network service provision and the maintenance of crucial network connectivity links. The performance of this approach is assessed using numerical results generated by simulations. Full article
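A minimal sketch of a force-based placement step of the kind the abstract describes, assuming attraction toward nearby client demand and repulsion between close access points; the paper's actual force laws and the connectivity constraint to the network root are not reproduced here.

```python
import numpy as np

def force_step(aps, clients, step=0.1, repel_radius=2.0):
    """One local update: each AP is pulled toward the centroid of the
    clients nearest to it and pushed away from APs within repel_radius."""
    # Assign each client to its nearest AP.
    dists = np.linalg.norm(clients[:, None, :] - aps[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)
    new_aps = aps.copy()
    for i, p in enumerate(aps):
        mine = clients[nearest == i]
        attract = (mine.mean(axis=0) - p) if len(mine) else np.zeros(2)
        repel = np.zeros(2)
        for j, q in enumerate(aps):
            d = np.linalg.norm(p - q)
            if i != j and d < repel_radius:
                repel += (p - q) / (d**2 + 1e-9)   # push overlapping APs apart
        new_aps[i] = p + step * (attract + repel)
    return new_aps

rng = np.random.default_rng(0)
aps, clients = rng.uniform(0, 10, (5, 2)), rng.uniform(0, 10, (60, 2))
for _ in range(200):
    aps = force_step(aps, clients)
print(aps.round(2))  # APs spread to match client distribution
```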
28 pages, 10613 KiB  
Article
Analysis of Program Representations Based on Abstract Syntax Trees and Higher-Order Markov Chains for Source Code Classification Task
by Artyom V. Gorchakov, Liliya A. Demidova and Peter N. Sovietov
Future Internet 2023, 15(9), 314; https://doi.org/10.3390/fi15090314 - 18 Sep 2023
Cited by 2 | Viewed by 1307
Abstract
In this paper, we consider the research and development of classifiers that are trained to predict the task solved by source code. Possible applications of such task detection algorithms include method name prediction, hardware–software partitioning, programming standard violation detection, and semantic code duplication search. We provide a comparative analysis of modern approaches to source code transformation into vector-based representations that extend the variety of classification and clustering algorithms that can be used for intelligent source code analysis. These approaches include word2vec, code2vec, first-order and second-order Markov chains constructed from abstract syntax trees (AST), histograms of assembly language instruction opcodes, and histograms of AST node types. The vectors obtained with the aforementioned approaches are then used to train such classification algorithms as k-nearest neighbors (KNN), support vector machine (SVM), random forest (RF), and multilayer perceptron (MLP). The obtained results show that the use of program vectors based on first-order AST-based Markov chains with an RF-based classifier leads to the highest accuracy, precision, recall, and F1 score. Increasing the order of the Markov chains considerably increases the dimensionality of a vector without any improvement in classifier quality, so we conclude that first-order Markov chains are best suited for real-world applications. Additionally, the experimental study shows that first-order AST-based Markov chains are least sensitive to the choice of classification algorithm. Full article
(This article belongs to the Section Big Data and Augmented Intelligence)
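As a sketch of the first-order AST Markov chain representation for Python source (the paper works with its own corpus and feature pipeline), parent-to-child node-type transition frequencies can be flattened into a fixed-length vector and fed to a classifier such as an RF:

```python
import ast
from collections import Counter
from itertools import product

# A small, fixed vocabulary of AST node types for illustration.
NODE_TYPES = sorted(t.__name__ for t in (ast.Module, ast.FunctionDef,
                    ast.For, ast.If, ast.Call, ast.Name, ast.BinOp,
                    ast.Constant, ast.Return, ast.Assign))

def markov_vector(source: str):
    """First-order transition frequencies between parent and child AST
    node types, flattened into a fixed-length feature vector."""
    counts = Counter()
    for parent in ast.walk(ast.parse(source)):
        for child in ast.iter_child_nodes(parent):
            counts[(type(parent).__name__, type(child).__name__)] += 1
    total = sum(counts.values()) or 1
    return [counts[(a, b)] / total for a, b in product(NODE_TYPES, NODE_TYPES)]

code = "def f(xs):\n    s = 0\n    for x in xs:\n        s = s + x\n    return s"
vec = markov_vector(code)
print(len(vec), max(vec))  # 100-dimensional vector, ready for an RF/KNN/SVM
```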
23 pages, 1006 KiB  
Article
Proof of Flow: A Design Pattern for the Green Energy Market
by Valerio Mandarino, Giuseppe Pappalardo and Emiliano Tramontana
Future Internet 2023, 15(9), 313; https://doi.org/10.3390/fi15090313 - 17 Sep 2023
Viewed by 1078
Abstract
The increased penetration of Distributed Energy Resources (DERs) in electricity markets has given rise to a new category of energy players, called Aggregators, whose role is to ensure fair remuneration for energy supplied by DERs, and to support the smooth feeding of the intermittent energy produced into the distribution network. This paper presents a software solution, described as a design pattern, that governs the interaction between an Aggregator and DERs, leveraging blockchain technology to achieve a higher degree of decentralization, data integrity, and security through a properly designed, blockchain-based smart contract. Thus, the proposed solution reduces the reliance on intermediaries acting as authorities, while affording transparency, efficiency, and trust to the energy exchange process. Thanks to the underlying blockchain properties, generated events are easily observable and cannot be forged or altered. However, blockchain technology has inherent drawbacks, mainly the cost of storage and execution; hence, our solution provides additional strategies for limiting blockchain usage without undermining its strengths. Moreover, the design of our smart contract takes care of orchestrating the players and copes with their potential mutual disagreements, which could arise from differing energy measurements, by providing an automatic decision process to resolve such disputes. The overall approach results in lower fees for running the smart contracts that support energy players and in a greater degree of fairness assurance. Full article
(This article belongs to the Special Issue Blockchain and Web 3.0: Applications, Challenges and Future Trends)
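The automatic dispute-resolution idea can be sketched off-chain as follows; the tolerance rule and fallback here are assumptions for illustration, not the contract logic from the paper.

```python
def settle_energy_reading(der_kwh: float, agg_kwh: float,
                          tolerance: float = 0.02) -> float:
    """Hypothetical automatic decision process: if the DER's and the
    Aggregator's energy measurements agree within a relative tolerance,
    settle on their mean; otherwise fall back to the lower reading
    while the dispute is logged for later review."""
    if abs(der_kwh - agg_kwh) <= tolerance * max(der_kwh, agg_kwh):
        return (der_kwh + agg_kwh) / 2
    return min(der_kwh, agg_kwh)

print(settle_energy_reading(100.0, 101.0))  # within 2%: settle on 100.5
print(settle_energy_reading(100.0, 90.0))   # dispute: settle on 90.0
```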
2 pages, 168 KiB  
Editorial
Editorial for the Special Issue on “Software Engineering and Data Science”, Volume II
by Davide Tosi
Future Internet 2023, 15(9), 312; https://doi.org/10.3390/fi15090312 - 16 Sep 2023
Viewed by 863
Abstract
The Special Issue “Software Engineering and Data Science, Volume II” is the natural continuation of its highly successful predecessor, Volume I [...] Full article
(This article belongs to the Special Issue Software Engineering and Data Science II)
27 pages, 600 KiB  
Review
Automatic Short Text Summarization Techniques in Social Media Platforms
by Fahd A. Ghanem, M. C. Padma and Ramez Alkhatib
Future Internet 2023, 15(9), 311; https://doi.org/10.3390/fi15090311 - 13 Sep 2023
Cited by 1 | Viewed by 1980
Abstract
The rapid expansion of social media platforms has resulted in an unprecedented surge of short text content being generated on a daily basis. Extracting valuable insights and patterns from this vast volume of textual data necessitates specialized techniques that can effectively condense information while preserving its core essence. In response to this challenge, automatic short text summarization (ASTS) techniques have emerged as a compelling solution, gaining significant importance in their development. This paper delves into the domain of summarizing short text on social media, exploring various types of short text and the associated challenges they present. It also investigates the approaches employed to generate concise and meaningful summaries. By providing a survey of the latest methods and potential avenues for future research, this paper contributes to the advancement of ASTS in the ever-evolving landscape of social media communication. Full article
25 pages, 813 KiB  
Review
Exploring Homomorphic Encryption and Differential Privacy Techniques towards Secure Federated Learning Paradigm
by Rezak Aziz, Soumya Banerjee, Samia Bouzefrane and Thinh Le Vinh
Future Internet 2023, 15(9), 310; https://doi.org/10.3390/fi15090310 - 13 Sep 2023
Cited by 4 | Viewed by 3239
Abstract
The trend of the next generation of the internet has already been scrutinized by top analytics enterprises. According to Gartner, it is predicted that, by 2024, 75% of the global population will have their personal data covered under privacy regulations. This alarming statistic necessitates the orchestration of several security components to address the enormous challenges posed by federated and distributed learning environments. Federated learning (FL) is a promising technique that allows multiple parties to collaboratively train a model without sharing their data. However, even though FL is seen as a privacy-preserving distributed machine learning method, recent works have demonstrated that FL is vulnerable to some privacy attacks. Homomorphic encryption (HE) and differential privacy (DP) are two promising techniques that can be used to address these privacy concerns. HE allows secure computations on encrypted data, while DP provides strong privacy guarantees by adding noise to the data. This paper first reviews known attacks on privacy in federated learning and then provides an overview of HE and DP techniques for secure federated learning in next-generation internet applications. It discusses the strengths and weaknesses of these techniques in different settings as described in the literature, with a particular focus on the trade-off between privacy and convergence, as well as the computation overheads involved. The objective of this paper is to analyze the challenges associated with each technique and identify potential opportunities and solutions for designing a more robust, privacy-preserving federated learning framework. Full article
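A minimal sketch of the DP side of this trade-off, assuming a DP-SGD-style Gaussian mechanism: the client clips its model update and adds calibrated noise before sharing it (HE would instead encrypt the update so the server can aggregate without ever seeing it).

```python
import numpy as np

def dp_sanitize(update: np.ndarray, clip_norm: float = 1.0,
                noise_multiplier: float = 1.1,
                rng=np.random.default_rng(0)) -> np.ndarray:
    """Clip the update's L2 norm, then add Gaussian noise scaled to the
    clipping bound (the Gaussian mechanism used in DP-SGD-style FL)."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, update.shape)
    return clipped + noise

raw_update = np.array([0.8, -2.0, 0.3])
print(dp_sanitize(raw_update))  # noisy, norm-bounded update sent to the server
```

More noise (a larger `noise_multiplier`) strengthens the privacy guarantee but slows convergence, which is exactly the trade-off the survey examines.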
17 pages, 813 KiB  
Article
On Evaluating IoT Data Trust via Machine Learning
by Timothy Tadj, Reza Arablouei and Volkan Dedeoglu
Future Internet 2023, 15(9), 309; https://doi.org/10.3390/fi15090309 - 12 Sep 2023
Viewed by 1180
Abstract
Data trust in IoT is crucial for safeguarding privacy, security, reliable decision-making, user acceptance, and complying with regulations. Various approaches based on supervised or unsupervised machine learning (ML) have recently been proposed for evaluating IoT data trust. However, assessing their real-world efficacy is hard mainly due to the lack of related publicly available datasets that can be used for benchmarking. Since obtaining such datasets is challenging, we propose a data synthesis method, called random walk infilling (RWI), to augment IoT time-series datasets by synthesizing untrustworthy data from existing trustworthy data. Thus, RWI enables us to create labeled datasets that can be used to develop and validate ML models for IoT data trust evaluation. We also extract new features from IoT time-series sensor data that effectively capture its autocorrelation as well as its cross-correlation with the data of the neighboring (peer) sensors. These features can be used to learn ML models for recognizing the trustworthiness of IoT sensor data. Equipped with our synthesized ground-truth-labeled datasets and informative correlation-based features, we conduct extensive experiments to critically examine various approaches to evaluating IoT data trust via ML. The results reveal that commonly used ML-based approaches to IoT data trust evaluation, which rely on unsupervised cluster analysis to assign trust labels to unlabeled data, perform poorly. This poor performance is due to the underlying assumption that clustering provides reliable labels for data trust, which is found to be untenable. The results also indicate that ML models, when trained on datasets augmented via RWI and using the proposed features, generalize well to unseen data and surpass existing related approaches. Moreover, we observe that a semi-supervised ML approach that requires only about 10% of the data labeled offers competitive performance while being practically more appealing compared to the fully supervised approaches. The related Python code and data are available online. Full article
(This article belongs to the Special Issue Information and Future Internet Security, Trust and Privacy II)
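A sketch of the random-walk-infilling idea under simple assumptions: an interval of a trustworthy series is replaced by a random walk bridged back to the original endpoint, yielding per-sample trust labels. The paper's exact RWI procedure may differ.

```python
import numpy as np

def random_walk_infill(series: np.ndarray, start: int, end: int,
                       scale: float = 0.5, rng=np.random.default_rng(1)):
    """Return a copy of `series` whose [start, end) interval is replaced
    by a random walk from series[start] toward series[end - 1], plus a
    0/1 trust label per sample (0 = synthesized, untrustworthy)."""
    out, labels = series.copy(), np.ones(len(series), dtype=int)
    walk = series[start] + np.cumsum(rng.normal(0, scale, end - start))
    # Linearly warp the walk so it lands back on the original endpoint.
    drift = np.linspace(0, series[end - 1] - walk[-1], end - start)
    out[start:end] = walk + drift
    labels[start:end] = 0
    return out, labels

clean = np.sin(np.linspace(0, 6, 200))          # trustworthy sensor signal
dirty, trust = random_walk_infill(clean, 80, 140)
print(trust[75:85], dirty[80:84].round(2))      # labeled training data
```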
22 pages, 8329 KiB  
Article
Prototyping a Hyperledger Fabric-Based Security Architecture for IoMT-Based Health Monitoring Systems
by Filippos Pelekoudas-Oikonomou, José C. Ribeiro, Georgios Mantas, Georgia Sakellari and Jonathan Gonzalez
Future Internet 2023, 15(9), 308; https://doi.org/10.3390/fi15090308 - 11 Sep 2023
Cited by 2 | Viewed by 1377
Abstract
The Internet of Medical Things (IoMT) has risen significantly in recent years and has provided better quality of life by enabling IoMT-based health monitoring systems. Despite that fact, innovative security mechanisms are required to meet the security concerns of such systems effectively and efficiently. Additionally, the industry and the research community have anticipated that blockchain technology will be a disruptive technology that will be able to be integrated into innovative security solutions for IoMT networks, since it has the potential to play a big role in: (a) enabling secure data transmission, (b) ensuring IoMT device security, and (c) enabling tamper-proof data storage. Therefore, the purpose of this research work is to design a novel lightweight blockchain-based security architecture for IoMT-based health monitoring systems, leveraging the features of the Hyperledger Fabric (HF) platform, its utilities, and its lightweight blockchain nature in order to: (i) ensure entity authentication, (ii) ensure data confidentiality, and (iii) enable a more energy-efficient blockchain-based security architecture for IoMT-based health monitoring systems, while considering the limited resources of IoMT gateways. While security mechanisms for IoT utilizing HF do exist, to the best of our knowledge there is no specific HF-based architecture for IoMT-based health monitoring systems. Full article
(This article belongs to the Special Issue The Future Internet of Medical Things II)
27 pages, 2600 KiB  
Article
FL-LoRaMAC: A Novel Framework for Enabling On-Device Learning for LoRa-Based IoT Applications
by Shobhit Aggarwal and Asis Nasipuri
Future Internet 2023, 15(9), 307; https://doi.org/10.3390/fi15090307 - 10 Sep 2023
Viewed by 1714
Abstract
The Internet of Things (IoT) enables us to gain access to a wide range of data from the physical world that can be analyzed for deriving critical state information. In this regard, machine learning (ML) is a valuable tool that can be used to develop models based on observed physical data, leading to efficient analytical decisions, including anomaly detection. In this work, we address some key challenges for applying ML in IoT applications, including maintaining the privacy of user data needed for developing ML models and minimizing the communication cost of transmitting data over the IoT network. We consider a representative application: anomaly detection of ECG signals obtained from a set of low-cost wearable sensors and transmitted to a central server using LoRaWAN, a popular and emerging low-power wide-area network (LPWAN) technology. We present a novel framework that utilizes federated learning (FL) to preserve data privacy, together with appropriate features for the uplink and downlink communications between the end devices and the gateway to optimize the communication cost. Performance results obtained from computer simulations demonstrate that the proposed framework leads to a 98% reduction in the volume of data required to achieve the same level of performance as traditional centralized ML. Full article
(This article belongs to the Special Issue Applications of Wireless Sensor Networks and Internet of Things)
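At the heart of such a framework sits federated averaging; here is a minimal sketch of that aggregation step (the LoRaWAN-specific payload design the paper adds on top is not shown).

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Aggregate client model weights, weighting each client by its
    local dataset size (classic FedAvg)."""
    total = sum(client_sizes)
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(len(client_weights[0]))
    ]

# Three clients, each holding a two-layer model as a list of arrays.
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 2)), rng.normal(size=2)] for _ in range(3)]
global_model = fed_avg(clients, client_sizes=[120, 80, 200])
print(global_model[0].shape, global_model[1].round(2))
```

Only weights cross the network, never the raw ECG samples, which is where both the privacy preservation and the reduction in transmitted data volume come from.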
19 pages, 374 KiB  
Article
An Automatic Transformer from Sequential to Parallel Java Code
by Alessandro Midolo and Emiliano Tramontana
Future Internet 2023, 15(9), 306; https://doi.org/10.3390/fi15090306 - 08 Sep 2023
Viewed by 982
Abstract
Sequential programs can benefit from parallel execution to improve their performance. When developing a parallel application, several techniques are employed to achieve the desired behavior: identifying parts that can run in parallel, synchronizing access to shared data, tuning performance, etc. Admittedly, manually transforming a sequential application to make it parallel can be tedious due to the large number of lines of code to inspect, the possibility of errors arising from inaccurate data dependence analysis leading to unpredictable behavior, and inefficiencies when the workload between parallel threads is unbalanced. This paper proposes an automatic approach that analyzes Java source code to identify method calls that are suitable for parallel execution and transforms them so that they run in another thread. The approach is based on data dependence and control dependence analyses to determine the execution flow and data accessed. Based on the proposed method, a tool has been developed to enhance applications by incorporating parallelism, i.e., transforming suitable method calls to execute on parallel threads, and synchronizing data access where needed. The developed tool has been extensively tested to verify the accuracy of its analysis in finding parallel execution opportunities, the correctness of the source code alterations, and the resultant performance gain. Full article
(This article belongs to the Section Smart System Infrastructure and Applications)
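The paper's transformer emits Java threads; as a language-neutral illustration of the same transformation, the Python sketch below runs two calls with no mutual data dependence concurrently and joins at the point where both results are needed.

```python
from concurrent.futures import ThreadPoolExecutor

def load_orders(path):      # independent of load_customers: no shared data
    return [f"order from {path}"]

def load_customers(path):   # independent of load_orders
    return [f"customer from {path}"]

# Sequential form the analysis would start from:
#   orders = load_orders("orders.csv")
#   customers = load_customers("customers.csv")
#   report = orders + customers
# After the transformation, the two independent calls run in parallel:
with ThreadPoolExecutor() as pool:
    orders_future = pool.submit(load_orders, "orders.csv")
    customers_future = pool.submit(load_customers, "customers.csv")
    report = orders_future.result() + customers_future.result()  # join point
print(report)
```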
17 pages, 1136 KiB  
Article
Entering the Metaverse from the JVM: The State of the Art, Challenges, and Research Areas of JVM-Based Web 3.0 Tools and Libraries
by Vlad Bucur and Liviu-Cristian Miclea
Future Internet 2023, 15(9), 305; https://doi.org/10.3390/fi15090305 - 07 Sep 2023
Cited by 1 | Viewed by 1075
Abstract
Web 3.0 is the basis on which the proposed metaverse, a seamless virtual world enabled by computers and interconnected devices, hopes to interact with its users; but beyond the high-level overview of what Web 3.0 applications try to achieve, the implementation still comes down to low-level coding details. This article aims to analyze the low-level implementations of key components of Web 3.0 using a variety of frameworks and tools as well as several JVM-based languages. This paper breaks down the low-level implementation of smart contracts and semantic web principles using three frameworks, Corda and Ethereum for smart contracts and Jena for the semantic web, with both Scala and Java as implementing languages, while highlighting differences and similarities between the frameworks used. Full article
21 pages, 1310 KiB  
Article
Hospital Readmission and Length-of-Stay Prediction Using an Optimized Hybrid Deep Model
by Alireza Tavakolian, Alireza Rezaee, Farshid Hajati and Shahadat Uddin
Future Internet 2023, 15(9), 304; https://doi.org/10.3390/fi15090304 - 06 Sep 2023
Viewed by 1562
Abstract
Hospital readmission and length-of-stay predictions provide information on how to manage hospital bed capacity and the number of required staff, especially during pandemics. We present a hybrid deep model called the Genetic Algorithm-Optimized Convolutional Neural Network (GAOCNN), with a unique preprocessing method, to predict hospital readmission and the length of stay required for patients with various conditions. GAOCNN uses one-dimensional convolutional layers to predict hospital readmission and the length of stay, with the layer parameters optimized via a genetic algorithm. To show the performance of the proposed model on patients with various conditions, we evaluate it on three healthcare datasets: the Diabetes 130-US hospitals dataset, the COVID-19 dataset, and the MIMIC-III dataset. The Diabetes 130-US hospitals dataset includes information on both readmission and length of stay, while the COVID-19 and MIMIC-III datasets include only length-of-stay information. Experimental results show that the proposed model's accuracy for hospital readmission was 97.2% for diabetic patients, and the accuracy of the length-of-stay prediction was 89%, 99.4%, and 94.1% for the diabetic, COVID-19, and ICU patients, respectively. These results confirm the superiority of the proposed model over existing methods. Our findings offer a platform for managing healthcare funds and resources for patients with various diseases. Full article
(This article belongs to the Special Issue Internet of Things (IoT) for Smart Living and Public Health)
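A compact sketch of the GA-over-hyperparameters loop (chromosome = layer hyperparameters, fitness = validation accuracy); `evaluate` is a stub standing in for training the 1-D CNN, and crossover is omitted for brevity.

```python
import random

random.seed(0)
SEARCH = {"filters": [16, 32, 64, 128], "kernel": [3, 5, 7], "layers": [1, 2, 3]}

def evaluate(genes):
    """Hypothetical fitness: train a 1-D CNN with these hyperparameters
    and return validation accuracy. Stubbed here for illustration."""
    return 0.8 + 0.001 * genes["filters"] - 0.01 * abs(genes["kernel"] - 5)

def mutate(genes):
    child = dict(genes)
    key = random.choice(list(SEARCH))
    child[key] = random.choice(SEARCH[key])   # re-sample one gene
    return child

population = [{k: random.choice(v) for k, v in SEARCH.items()} for _ in range(8)]
for generation in range(10):
    population.sort(key=evaluate, reverse=True)
    parents = population[:4]                  # selection: keep the fittest half
    population = parents + [mutate(random.choice(parents)) for _ in range(4)]
print(max(population, key=evaluate))          # best hyperparameter set found
```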
19 pages, 12249 KiB  
Article
Internet of Robotic Things (IoRT) and Metaheuristic Optimization Techniques Applied for Wheel-Legged Robot
by Mateusz Malarczyk, Grzegorz Kaczmarczyk, Jaroslaw Szrek and Marcin Kaminski
Future Internet 2023, 15(9), 303; https://doi.org/10.3390/fi15090303 - 06 Sep 2023
Viewed by 1123
Abstract
This paper presents the operation of a remotely controlled, wheel-legged robot. The developed Wi-Fi connection framework is established on a popular ARM microcontroller board. The implementation provides a low-cost solution that is in congruence with the newest industrial standards. Additionally, the problem of limb structure and motor speed control is solved. The design process of the mechanical structure is enhanced by a nature-inspired metaheuristic optimization algorithm. An FOC-based BLDC motor speed control strategy is selected to guarantee dynamic operation of the drive. The paper provides both the theoretical considerations and the obtained prototype experimental results. Full article
(This article belongs to the Special Issue Internet of Things (IoT) for Smart Living and Public Health)
19 pages, 9235 KiB  
Article
A Hybrid Neural Ordinary Differential Equation Based Digital Twin Modeling and Online Diagnosis for an Industrial Cooling Fan
by Chao-Chung Peng and Yi-Ho Chen
Future Internet 2023, 15(9), 302; https://doi.org/10.3390/fi15090302 - 04 Sep 2023
Cited by 2 | Viewed by 1221
Abstract
Digital twins can reflect the dynamical behavior of the identified system, enabling self-diagnosis and prediction in the digital world to optimize the intelligent manufacturing process. One of the key benefits of digital twins is the ability to provide real-time data analysis during operation, which can monitor the condition of the system and prognose failures, allowing manufacturers to resolve a problem before it happens. However, most digital twins are constructed using discrete-time models, which are not able to describe the dynamics of the system across different sampling frequencies. In addition, the high computational complexity due to significant memory storage and large model sizes makes such digital twins challenging for online diagnosis. To overcome these issues, this paper proposes a novel structure for creating digital twins of cooling fan systems by combining neural ordinary differential equations with physical dynamical differential equations. Evaluated using simulation data, the proposed structure not only shows accurate modeling results compared to other digital twin methods but also requires fewer parameters and smaller model sizes. The proposed approach has also been demonstrated using experimental data, is robust to measurement noise, and has proven to be an effective solution for online diagnosis in the intelligent manufacturing process. Full article
(This article belongs to the Special Issue Digital Twins in Intelligent Manufacturing)
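The hybrid idea can be sketched with a toy first-order fan model plus a learned residual term; here the residual is a fixed function standing in for the trained neural ODE component, and plain Euler integration replaces a proper ODE solver.

```python
import numpy as np

def physics(omega, u, tau=0.5, k=1.0):
    """Assumed first-order fan model: d(omega)/dt = (k*u - omega) / tau."""
    return (k * u - omega) / tau

def neural_residual(omega, u):
    """Stand-in for the trained neural network capturing unmodeled
    effects (friction nonlinearity, airflow); a real digital twin would
    evaluate a small learned network here."""
    return -0.05 * omega**2 + 0.02 * u

def simulate(u_seq, dt=0.01, omega0=0.0):
    omega, trace = omega0, []
    for u in u_seq:
        # Hybrid dynamics: known physics plus learned correction.
        omega += dt * (physics(omega, u) + neural_residual(omega, u))
        trace.append(omega)
    return np.array(trace)

speeds = simulate(np.ones(500))       # response to a step input
print(speeds[[0, 99, 499]].round(3))
```

Because the continuous-time dynamics are integrated rather than tabulated, the same model can be evaluated at any sampling frequency simply by changing `dt`, which is the advantage over discrete-time twins the abstract points to.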
24 pages, 1339 KiB  
Article
Wireless Energy Harvesting for Internet-of-Things Devices Using Directional Antennas
by Hsiao-Ching Chang, Hsing-Tsung Lin and Pi-Chung Wang
Future Internet 2023, 15(9), 301; https://doi.org/10.3390/fi15090301 - 03 Sep 2023
Cited by 1 | Viewed by 1733
Abstract
With the rapid development of the Internet of Things, the number of wireless devices is increasing rapidly. Because of their limited battery capacity, these devices may suffer from power depletion. Radio frequency (RF) energy harvesting technology can wirelessly charge devices to prolong their lifespan, and with beamforming, the beams generated by an antenna array can select the direction for wireless charging. Although a good charging-time schedule should be short, energy efficiency should also be considered. In this work, we propose two algorithms to optimize the time consumed in charging devices. We first present a greedy algorithm to minimize the total charging time. Then, a differential evolution (DE) algorithm is proposed to minimize energy overflow and improve energy efficiency; the DE algorithm can also gradually increase the number of fully charged devices. The experimental results show that both the proposed greedy and DE algorithms can find a schedule with a short charging time and the lowest energy overflow. The DE algorithm can further improve the performance of data transmission, promoting the feasibility of potential joint wireless sensing and charging applications, by reducing the number of devices that are fully charged at the same time. Full article
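As an illustration of casting the charging-time allocation as a continuous optimization, the sketch below uses SciPy's differential evolution on a toy model with per-beam charging rates and penalties for shortfall and overflow; the paper's system model and objective are richer.

```python
import numpy as np
from scipy.optimize import differential_evolution

# rates[b][d]: charging rate (energy units/s) that beam direction b
# delivers to device d; off-beam devices still harvest a little energy.
rates = np.array([[3.0, 0.5, 0.2],
                  [0.4, 2.5, 0.3],
                  [0.1, 0.6, 2.8]])
demand = np.array([10.0, 8.0, 12.0])   # remaining energy need per device

def cost(t):                           # t[b] = seconds spent on beam b
    delivered = rates.T @ t
    shortfall = np.clip(demand - delivered, 0, None).sum()
    overflow = np.clip(delivered - demand, 0, None).sum()  # wasted energy
    return t.sum() + 10.0 * shortfall + 1.0 * overflow

result = differential_evolution(cost, bounds=[(0, 20)] * 3, seed=0)
print(result.x.round(2), round(result.fun, 2))  # beam times, total cost
```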
18 pages, 3680 KiB  
Article
Application of ChatGPT-Based Digital Human in Animation Creation
by Chong Lan, Yongsheng Wang, Chengze Wang, Shirong Song and Zheng Gong
Future Internet 2023, 15(9), 300; https://doi.org/10.3390/fi15090300 - 02 Sep 2023
Cited by 2 | Viewed by 3515
Abstract
Traditional 3D animation creation involves a process of motion acquisition, dubbing, and mouth movement data binding for each character. To streamline animation creation, we propose combining artificial intelligence (AI) with a motion capture system. This integration aims to reduce the time, workload, and cost associated with animation creation. By utilizing AI and natural language processing, the characters can engage in independent learning, generating their own responses and interactions, thus moving away from the traditional method of creating digital characters with pre-defined behaviors. In this paper, we present an approach that employs a digital person’s animation environment. We utilized Unity plug-ins to drive the character’s mouth Blendshape, synchronize the character’s voice and mouth movements in Unity, and connect the digital person to an AI system. This integration enables AI-driven language interactions within animation production. Through experimentation, we evaluated the correctness of the natural language interaction of the digital human in the animated scene, the real-time synchronization of mouth movements, the potential for singularity in guiding users during digital human animation creation, and its ability to guide user interactions through its own thought process. Full article
(This article belongs to the Topic AI Chatbots: Threat or Opportunity?)
18 pages, 442 KiB  
Article
Precoding for RIS-Assisted Multi-User MIMO-DQSM Transmission Systems
by Francisco R. Castillo-Soria, J. Alberto Del Puerto-Flores, Cesar A. Azurdia-Meza, Vinoth Babu Kumaravelu, Jorge Simón and Carlos A. Gutierrez
Future Internet 2023, 15(9), 299; https://doi.org/10.3390/fi15090299 - 02 Sep 2023
Cited by 1 | Viewed by 1170
Abstract
This paper presents two precoding techniques for a reconfigurable intelligent surface (RIS)-assisted multi-user (MU) multiple-input multiple-output (MIMO) double quadrature spatial modulation (DQSM) downlink transmission system. Instead of being applied at the remote RIS, the phase shift vector is applied at the base station (BS) by using a double precoding stage. Results show that the proposed RIS-MU-MIMO-DQSM system has gains of up to 17 dB in terms of bit error rate (BER) and a reduction in detection complexity of 51% when compared with the conventional MU-MIMO system based on quadrature amplitude modulation (QAM). Compared with a similar system based on amplify and forward (AF) relay-assisted technique, the proposed system has a gain of up to 18 dB in terms of BER under the same conditions and parameters. Full article
20 pages, 1231 KiB  
Article
Intelligent Unsupervised Network Traffic Classification Method Using Adversarial Training and Deep Clustering for Secure Internet of Things
by Weijie Zhang, Lanping Zhang, Xixi Zhang, Yu Wang, Pengfei Liu and Guan Gui
Future Internet 2023, 15(9), 298; https://doi.org/10.3390/fi15090298 - 01 Sep 2023
Viewed by 1345
Abstract
Network traffic classification (NTC) has attracted great attention in many applications, such as secure communications and intrusion detection systems. Existing NTC methods based on supervised learning rely on sufficient labeled datasets in the training phase, but for most traffic datasets it is difficult to obtain label information in practical applications. Although unsupervised learning does not rely on labels, its classification accuracy is not high, and the number of data classes is difficult to determine. This paper proposes an unsupervised NTC method based on adversarial training and deep clustering, with improved classification performance and lower computational complexity in comparison with traditional clustering algorithms. Here, the training process does not require data labels, which greatly reduces the computational complexity of network traffic classification through pretraining. In the pretraining stage, an autoencoder (AE) is used to reduce the dimension of features, reducing the complexity of the initially high-dimensional network traffic features. Moreover, we employ an adversarial training model and a deep clustering structure to further optimize the extracted features. The experimental results show that our proposed method has robust performance, with a multiclassification accuracy of 92.2%, which is suitable for classification with a large number of unlabeled data in actual application scenarios. This paper focuses on breakthroughs at the algorithm stage; future work can address deployment and adaptation in practical environments. Full article
(This article belongs to the Special Issue Information and Future Internet Security, Trust and Privacy II)
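A sketch of the pretraining-plus-clustering skeleton in its simplest form (an autoencoder compresses flow features, then K-means clusters the latent codes); the adversarial training and deep clustering refinements the paper adds sit on top of this.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

torch.manual_seed(0)
X = torch.randn(512, 40)  # stand-in for normalized traffic-flow features

class AE(nn.Module):
    def __init__(self, d_in=40, d_z=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, 32), nn.ReLU(), nn.Linear(32, d_z))
        self.dec = nn.Sequential(nn.Linear(d_z, 32), nn.ReLU(), nn.Linear(32, d_in))
    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

model = AE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):  # pretraining: minimize reconstruction error, no labels
    recon, _ = model(X)
    loss = nn.functional.mse_loss(recon, X)
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    _, Z = model(X)   # low-dimensional latent codes
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(Z.numpy())
print(round(loss.item(), 4), labels[:10])
```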
19 pages, 2089 KiB  
Article
Explainable Lightweight Block Attention Module Framework for Network-Based IoT Attack Detection
by Furkat Safarov, Mainak Basak, Rashid Nasimov, Akmalbek Abdusalomov and Young Im Cho
Future Internet 2023, 15(9), 297; https://doi.org/10.3390/fi15090297 - 01 Sep 2023
Cited by 1 | Viewed by 1096
Abstract
In the rapidly evolving landscape of internet usage, ensuring robust cybersecurity measures has become a paramount concern across diverse fields. Among the numerous cyber threats, denial of service (DoS) and distributed denial of service (DDoS) attacks pose significant risks, as they can render websites and servers inaccessible to their intended users. Conventional intrusion detection methods encounter substantial challenges in effectively identifying and mitigating these attacks due to their widespread nature, intricate patterns, and computational complexities. However, by harnessing the power of deep learning-based techniques, our proposed dense channel-spatial attention model exhibits exceptional accuracy in detecting and classifying DoS and DDoS attacks. The successful implementation of our proposed framework addresses the challenges posed by imbalanced data and exhibits its potential for real-world applications. By leveraging the dense channel-spatial attention mechanism, our model can precisely identify and classify DoS and DDoS attacks, bolstering the cybersecurity defenses of websites and servers. The high accuracy rates achieved across different datasets reinforce the robustness of our approach, underscoring its efficacy in enhancing intrusion detection capabilities. As a result, our framework holds promise in bolstering cybersecurity measures in real-world scenarios, contributing to the ongoing efforts to safeguard against cyber threats in an increasingly interconnected digital landscape. Comparative analysis with current intrusion detection methods reveals the superior performance of our model. We achieved accuracy rates of 99.38%, 99.26%, and 99.43% for Bot-IoT, CICIDS2017, and UNSW_NB15 datasets, respectively. These remarkable results demonstrate the capability of our approach to accurately detect and classify various types of DoS and DDoS assaults. By leveraging the inherent strengths of deep learning, such as pattern recognition and feature extraction, our model effectively overcomes the limitations of traditional methods, enhancing the accuracy and efficiency of intrusion detection systems. Full article
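A sketch of a channel-plus-spatial attention block of the general kind named above, in the CBAM style; the paper's exact "dense channel-spatial" wiring may differ.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Channel attention (pooling over H, W) followed by spatial
    attention (pooling over channels), CBAM-style."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention: weigh each channel by pooled statistics.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention: weigh each location by cross-channel stats.
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

att = ChannelSpatialAttention(32)
print(att(torch.randn(4, 32, 16, 16)).shape)  # torch.Size([4, 32, 16, 16])
```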
16 pages, 743 KiB  
Article
FREDY: Federated Resilience Enhanced with Differential Privacy
by Zacharias Anastasakis, Terpsichori-Helen Velivassaki, Artemis Voulkidis, Stavroula Bourou, Konstantinos Psychogyios, Dimitrios Skias and Theodore Zahariadis
Future Internet 2023, 15(9), 296; https://doi.org/10.3390/fi15090296 - 01 Sep 2023
Viewed by 1012
Abstract
Federated Learning is identified as a reliable technique for distributed training of ML models. Specifically, a set of dispersed nodes may collaborate through a federation in producing a jointly trained ML model without disclosing their data to each other. Each node performs local model training and then shares its trained model weights with a server node, usually called the Aggregator in federated learning, which aggregates the trained weights and then sends them back to its clients for another round of local training. Despite the data protection and security that FL provides to each client, there are still well-studied attacks, such as membership inference attacks, that can detect potential vulnerabilities of the FL system and thus expose sensitive data. In this paper, in order to prevent this kind of attack and address private data leakage, we introduce FREDY, a differentially private federated learning framework that enables knowledge transfer from private data. In particular, our approach follows a teachers–student scheme. Each teacher model is trained on sensitive, disjoint data in a federated manner, and the student model is trained on the most voted predictions of the teachers on public unlabeled data, which are noisily aggregated in order to guarantee the privacy of each teacher's sensitive data. Only the student model is publicly accessible, as the teacher models contain sensitive information. We show that our proposed approach guarantees the privacy of sensitive data against model inference attacks while combining the federated learning settings for the model training procedures. Full article
(This article belongs to the Special Issue Privacy and Security in Computing Continuum and Data-Driven Workflows)
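A minimal sketch of the noisy teacher-vote aggregation described above (in the PATE style): the student's label for a public sample is the Laplace-noised argmax of the teachers' votes, so no single teacher's data can dominate the outcome.

```python
import numpy as np

def noisy_vote(teacher_preds, n_classes, epsilon=1.0,
               rng=np.random.default_rng(0)):
    """teacher_preds: one predicted class per teacher for one public
    sample. Returns the noisy-max label used to train the student."""
    votes = np.bincount(teacher_preds, minlength=n_classes).astype(float)
    votes += rng.laplace(scale=1.0 / epsilon, size=n_classes)  # DP noise
    return int(votes.argmax())

teacher_preds = np.array([2, 2, 1, 2, 0, 2, 1, 2, 2, 1])  # 10 teachers vote
print(noisy_vote(teacher_preds, n_classes=3))
```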
22 pages, 3262 KiB  
Article
FLAME-VQA: A Fuzzy Logic-Based Model for High Frame Rate Video Quality Assessment
by Štefica Mrvelj and Marko Matulin
Future Internet 2023, 15(9), 295; https://doi.org/10.3390/fi15090295 - 01 Sep 2023
Viewed by 973
Abstract
In the quest to optimize user experience, network and service providers continually seek to deliver high-quality content tailored to individual preferences. However, predicting user perception of quality remains a challenging task, given the subjective nature of human perception and the plethora of technical attributes that contribute to the overall viewing experience. Thus, we introduce a Fuzzy Logic-bAsed ModEl for Video Quality Assessment (FLAME-VQA), leveraging the LIVE-YT-HFR database containing 480 video sequences and subjective ratings of their quality from 85 test subjects. The proposed model addresses the challenges of assessing user perception by capturing the intricacies of individual preferences and video attributes using fuzzy logic. It operates on four input parameters: video frame rate, compression rate, and spatial and temporal information. The Spearman Rank–Order Correlation Coefficient (SROCC) and Pearson Correlation Coefficient (PCC) show a high correlation between the output and the ground truth. For the training, test, and complete dataset, SROCC equals 0.8977, 0.8455, and 0.8961, respectively, while PCC equals 0.9096, 0.8632, and 0.9086, respectively. The model outperforms comparative models tested on the same dataset. Full article
(This article belongs to the Special Issue QoS in Wireless Sensor Network for IoT Applications)
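The fuzzy machinery in miniature, assuming triangular membership functions and a single toy rule ("high frame rate AND low compression implies high quality"); FLAME-VQA's actual rule base over its four inputs is of course larger.

```python
def tri(x, a, b, c):
    """Triangular membership: rises from a to peak b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def quality_score(fps, crf):
    high_fps = tri(fps, 30, 120, 121)   # degree that frame rate is "high"
    low_crf = tri(crf, -1, 0, 40)       # degree that compression is "low"
    fire = min(high_fps, low_crf)       # fuzzy AND = min (Mamdani-style)
    # Defuzzify between a "high quality" singleton at 5 and "low" at 1.
    return fire * 5 + (1 - fire) * 1

print(round(quality_score(fps=120, crf=10), 2))  # 4.0: leans high quality
print(round(quality_score(fps=24, crf=35), 2))   # 1.0: low quality
```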
2 pages, 146 KiB  
Editorial
Advances Techniques in Computer Vision and Multimedia
by Yang Wang
Future Internet 2023, 15(9), 294; https://doi.org/10.3390/fi15090294 - 01 Sep 2023
Viewed by 829
Abstract
Computer vision, which aims to enable computer systems to automatically see, recognize, and understand the visual world by simulating the mechanism of human vision, has experienced significant advancements and great success in areas closely related to human society [...] Full article
(This article belongs to the Special Issue Advances Techniques in Computer Vision and Multimedia)
16 pages, 1050 KiB  
Review
Enhancing E-Learning with Blockchain: Characteristics, Projects, and Emerging Trends
by Mahmoud Bidry, Abdellah Ouaguid and Mohamed Hanine
Future Internet 2023, 15(9), 293; https://doi.org/10.3390/fi15090293 - 28 Aug 2023
Cited by 4 | Viewed by 2308
Abstract
Blockchain represents a decentralized and distributed ledger technology, ensuring transparent and secure transaction recording across networks. This innovative technology offers several benefits, including increased security, trust, and transparency, making it suitable for a wide range of applications. In the last few years, there has been a growing interest in investigating the potential of Blockchain technology to enhance diverse fields, such as e-learning. In this research, we undertook a systematic literature review to explore the potential of Blockchain technology in enhancing the e-learning domain. Our research focused on four main questions: (1) What potential characteristics of Blockchain can contribute to enhancing e-learning? (2) What are the existing Blockchain projects dedicated to e-learning? (3) What are the limitations of existing projects? (4) What are the future trends in Blockchain-related research that will impact e-learning? The results showed that Blockchain technology has several characteristics that could benefit e-learning, including immutability, transparency, decentralization, security, and traceability. We also identified several existing Blockchain projects dedicated to e-learning and discussed their potential to revolutionize learning by providing more transparency, security, and effectiveness. However, our research also revealed several limitations and challenges that must be addressed to realize Blockchain technology's potential in e-learning. Full article
(This article belongs to the Special Issue Future Prospects and Advancements in Blockchain Technology)
19 pages, 5975 KiB  
Article
Autism Screening in Toddlers and Adults Using Deep Learning and Fair AI Techniques
by Ishaani Priyadarshini
Future Internet 2023, 15(9), 292; https://doi.org/10.3390/fi15090292 - 28 Aug 2023
Cited by 3 | Viewed by 1725
Abstract
Autism spectrum disorder (ASD) has been associated with conditions like depression, anxiety, and epilepsy due to its impact on an individual's educational, social, and employment outcomes. Since diagnosis is challenging and there is no cure, the goal is to maximize an individual's ability by reducing the symptoms, and early diagnosis plays a role in improving behavior and language development. In this paper, an autism screening analysis for toddlers and adults has been performed using fair AI (feature engineering, SMOTE, optimizations, etc.) and deep learning methods. The analysis considers traditional deep learning methods like Multilayer Perceptron (MLP), Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Long Short-Term Memory (LSTM), and also proposes two hybrid deep learning models, i.e., CNN–LSTM with Particle Swarm Optimization (PSO), and a CNN model combined with Gated Recurrent Units (GRU–CNN). The models have been validated using multiple performance metrics, and the analysis confirms that the proposed models perform better than the traditional models. Full article
(This article belongs to the Special Issue Machine Learning Perspective in the Convolutional Neural Network Era)
24 pages, 14748 KiB  
Article
A Novel SDWSN-Based Testbed for IoT Smart Applications
by Duaa Zuhair Al-Hamid, Pejman A. Karegar and Peter Han Joo Chong
Future Internet 2023, 15(9), 291; https://doi.org/10.3390/fi15090291 - 28 Aug 2023
Cited by 1 | Viewed by 1054
Abstract
Wireless sensor network (WSN) environment monitoring and smart city applications present challenges for maintaining network connectivity when, for example, dynamic events occur. Such applications can benefit from recent technologies such as software-defined networks (SDNs) and network virtualization to support network flexibility and offer validation for a physical network. This paper presents a testbed-based, software-defined wireless sensor network (SDWSN) for IoT applications, with a focus on promoting the approach of virtual network testing and analysis prior to physical network implementation in order to monitor and repair network failures. Herein, a physical network implementation employing hardware boards such as Texas Instruments CC2538 (TI CC2538) and TI CC1352R sensor nodes is presented, designed based on virtual WSN-based clustering for stationary and dynamic network use cases. Key performance indicators, such as the connection capability of a node (for example, a gateway node to the Internet) in terms of packet drop and energy consumption, are evaluated both virtually and physically. According to the test findings, the proposed software-defined physical network benefited from "prior-to-implementation" analysis via virtualization, as the performance of the virtual and physical networks is comparable. Full article
(This article belongs to the Special Issue QoS in Wireless Sensor Network for IoT Applications)
15 pages, 455 KiB  
Article
Short-Term Mobile Network Traffic Forecasting Using Seasonal ARIMA and Holt-Winters Models
by Irina Kochetkova, Anna Kushchazli, Sofia Burtseva and Andrey Gorshenin
Future Internet 2023, 15(9), 290; https://doi.org/10.3390/fi15090290 - 28 Aug 2023
Cited by 1 | Viewed by 2085
Abstract
Fifth-generation (5G) networks require efficient radio resource management (RRM), which should dynamically adapt to the current network load and user needs. Monitoring and forecasting network performance requirements and metrics helps with this task. One of the parameters that highly influences radio resource management is the profile of user traffic generated by various 5G applications. Forecasting such mobile network profiles helps with numerous RRM tasks such as network slicing and load balancing. In this paper, we analyze a dataset from a mobile network operator in Portugal that contains information about volumes of traffic in the download and upload directions in one-hour time slots. We apply two statistical models for forecasting download and upload traffic profiles, namely, seasonal autoregressive integrated moving average (SARIMA) and Holt-Winters models. We demonstrate that both models are suitable for forecasting mobile network traffic. Nevertheless, the SARIMA model is more appropriate for download traffic (e.g., MAPE [mean absolute percentage error] of 11.2% vs. 15% for Holt-Winters), while the Holt-Winters model is better suited for upload traffic (e.g., MAPE of 4.17% for Holt-Winters vs. 9.9% for SARIMA). Full article
(This article belongs to the Special Issue 5G Wireless Communication Networks II)
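A sketch of fitting both model families with statsmodels on a synthetic hourly series with daily seasonality and comparing MAPE, mirroring the kind of evaluation described (the operator's real data is not reproduced here):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(0)
hours = pd.date_range("2023-01-01", periods=21 * 24, freq="h")
daily = 10 + 5 * np.sin(2 * np.pi * hours.hour / 24)          # daily profile
traffic = pd.Series(daily + rng.normal(0, 0.8, len(hours)), index=hours)
train, test = traffic[:-24], traffic[-24:]                    # hold out one day

sarima = SARIMAX(train, order=(1, 0, 1),
                 seasonal_order=(1, 1, 1, 24)).fit(disp=False)
hw = ExponentialSmoothing(train, trend="add", seasonal="add",
                          seasonal_periods=24).fit()

def mape(actual, forecast):
    return float(np.mean(np.abs((actual - forecast) / actual)) * 100)

print("SARIMA MAPE:", round(mape(test, sarima.forecast(24)), 2))
print("Holt-Winters MAPE:", round(mape(test, hw.forecast(24)), 2))
```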
35 pages, 1386 KiB  
Article
3D Path Planning Algorithms in UAV-Enabled Communications Systems: A Mapping Study
by Jorge Carvajal-Rodriguez, Marco Morales and Christian Tipantuña
Future Internet 2023, 15(9), 289; https://doi.org/10.3390/fi15090289 - 27 Aug 2023
Viewed by 4924
Abstract
Unmanned Aerial Vehicles (UAVs) equipped with communication technologies have gained significant attention as a promising solution for providing wireless connectivity in remote, disaster-stricken areas lacking communication infrastructure. However, enabling UAVs to provide communications (e.g., UAVs acting as flying base stations) in real scenarios requires the integration of various technologies and algorithms. In particular, 3D path planning algorithms are crucial in determining an optimal, obstacle-free path so that UAVs, in isolation or forming networks, can provide wireless coverage in a specific region. Considering that most existing proposals in the literature only address path planning in a 2D environment, this paper systematically studies existing path-planning solutions for UAVs in a 3D environment in which optimization models (optimal and heuristic) have been applied. This paper analyzes 37 articles selected from 631 documents retrieved by a search in the Scopus database. It also presents an overview of UAV-enabled communications systems, the research questions, and the methodology for the systematic mapping study. Finally, it provides information about the objectives to be minimized or maximized, the optimization variables used, and the algorithmic strategies employed to solve the 3D path planning problem. Full article
17 pages, 3268 KiB  
Article
Spot Market Cloud Orchestration Using Task-Based Redundancy and Dynamic Costing
by Vyas O’Neill and Ben Soh
Future Internet 2023, 15(9), 288; https://doi.org/10.3390/fi15090288 - 27 Aug 2023
Viewed by 968
Abstract
Cloud computing has become ubiquitous in the enterprise environment as its on-demand model realizes technical and economic benefits for users. Cloud users demand a level of reliability, availability, and quality of service. Improvements to reliability generally come at the cost of additional replication. Existing approaches have focused on the replication of virtual environments as a method of improving the reliability of cloud services. As cloud systems move towards microservices-based architectures, a more granular approach to replication is now possible. In this paper, we propose a cloud orchestration approach that balances the potential cost of failure with the spot market running cost, optimizing the resource usage of the cloud system. We present the results of empirical testing we carried out using a simulator to compare the outcome of our proposed approach to a control algorithm based on a static reliability requirement. Our empirical testing showed an improvement of between 37% and 72% in total cost over the control, depending on the specific characteristics of the cloud models tested. We thus propose that in clouds where the cost of failure can be reasonably approximated, our approach may be used to optimize the cloud redundancy configuration to achieve a lower total cost. Full article
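The core trade-off can be sketched as picking the replication level that minimizes expected total cost, i.e., spot running cost plus probability-weighted failure cost; the per-task figures below are illustrative, not from the paper.

```python
def expected_cost(replicas, spot_price=0.02, p_fail=0.15, failure_cost=5.0):
    """Expected cost of one task at a given replication level, assuming
    independent instance failures: the task fails only if every replica
    fails. All figures are illustrative assumptions."""
    p_task_fail = p_fail ** replicas
    return replicas * spot_price + p_task_fail * failure_cost

for r in range(1, 6):
    print(r, round(expected_cost(r), 4))
print("optimal replicas:", min(range(1, 6), key=expected_cost))  # -> 3 here
```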