Future Internet, Volume 15, Issue 7 (July 2023) – 28 articles

Cover Story (view full-size image): Cloud applications' data confidentiality is a major concern for operators, as Cloud providers are not always considered reliable. Moreover, the exploitation of third-party software increases the probability of vulnerabilities and attacks. As a countermeasure, Trusted Execution Environments (TEEs) emerged in the Cloud to increase software isolation both from providers and external attackers. We present a methodology and a declarative prototype to support application deployment in TEEs by exploiting information-flow security to determine safe partitionings of software components. Through a probabilistic cost model, we enable application operators to select the best trade-off partitioning in terms of future re-partitioning costs and the number of domains.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
21 pages, 2406 KiB  
Article
Unveiling the Landscape of Operating System Vulnerabilities
by Manish Bhurtel and Danda B. Rawat
Future Internet 2023, 15(7), 248; https://doi.org/10.3390/fi15070248 - 24 Jul 2023
Viewed by 2866
Abstract
Operating systems play a crucial role in computer systems, serving as the fundamental infrastructure that supports a wide range of applications and services. However, they are also prime targets for malicious actors seeking to exploit vulnerabilities and compromise system security. This is a crucial area that requires active research; however, OS vulnerabilities have not been actively studied in recent years. Therefore, we conduct a comprehensive analysis of OS vulnerabilities, aiming to enhance the understanding of their trends, severity, and common weaknesses. Our research methodology encompasses data preparation, sampling of vulnerable OS categories and versions, and an in-depth analysis of trends, severity levels, and types of OS vulnerabilities. We scrape the high-level data from reliable and recognized sources to generate two refined OS vulnerability datasets: one for OS categories and another for OS versions. Our study reveals the susceptibility of popular operating systems such as Windows, Windows Server, Debian Linux, and Mac OS. Specifically, Windows 10, Windows 11, Android (v11.0, v12.0, v13.0), Windows Server 2012, Debian Linux (v10.0, v11.0), Fedora 37, and HarmonyOS 2 are identified as the most vulnerable OS versions in recent years (2021–2022). Notably, these vulnerabilities exhibit high severity, with maximum CVSS scores falling into the 7–8 and 9–10 ranges. Common vulnerability types, including CWE-119, CWE-20, CWE-200, and CWE-787, are prevalent in these OSs and require specific attention from OS vendors. The findings on trends, severity, and types of OS vulnerabilities from this research will serve as a valuable resource for vendors, security professionals, and end-users, empowering them to enhance OS security measures, prioritize vulnerability management efforts, and make informed decisions to mitigate risks associated with these vulnerabilities. Full article
(This article belongs to the Section Smart System Infrastructure and Applications)
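As a rough illustration of the kind of aggregation such a study performs, the sketch below buckets CVE records by CVSS severity range and counts CWE types with pandas; the column names and sample rows are assumptions for illustration, not the paper's dataset.

```python
# Sketch of the trend/severity aggregation step, on a hypothetical table of
# scraped CVE records with columns: os_version, year, cvss, cwe.
import pandas as pd

records = pd.DataFrame({
    "os_version": ["Windows 10", "Windows 10", "Debian Linux 11.0", "Fedora 37"],
    "year":       [2021, 2022, 2022, 2022],
    "cvss":       [9.8, 7.5, 8.1, 9.1],
    "cwe":        ["CWE-787", "CWE-20", "CWE-119", "CWE-787"],
})

# Bucket CVSS scores into severity ranges (e.g., 7-8, 9-10) per OS version.
records["severity_range"] = pd.cut(records["cvss"],
                                   bins=[0, 4, 7, 8, 9, 10],
                                   labels=["0-4", "4-7", "7-8", "8-9", "9-10"])
trend = records.groupby(["os_version", "year"]).size().rename("vulns")
common_cwes = records["cwe"].value_counts()

print(trend, common_cwes, sep="\n\n")
```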
16 pages, 3048 KiB  
Article
A Carrying Method for 5G Network Slicing in Smart Grid Communication Services Based on Neural Network
by Yang Hu, Liangliang Gong, Xinyang Li, Hui Li, Ruoxin Zhang and Rentao Gu
Future Internet 2023, 15(7), 247; https://doi.org/10.3390/fi15070247 - 20 Jul 2023
Cited by 3 | Viewed by 1067
Abstract
When applying 5G network slicing technology, the operator’s network resources in the form of mutually isolated logical network slices provide specific service requirements and quality of service guarantees for smart grid communication services. In the face of the new situation of 5G, which comprises the surge in demand for smart grid communication services and service types, as well as the digital and intelligent development of communication networks, it is even more important to provide a self-intelligent resource allocation and carrying method when slicing resources are allocated. To this end, a carrying method based on a neural network is proposed. The objective is to establish a hierarchical scheduling system for smart grid communication services at the power smart gateway at the edge, which realizes both (i) intelligent classification and matching of smart grid communication services to the characteristics of 5G network slicing and (ii) dynamic prediction of traffic in the slicing network. This hierarchical scheduling system extracts the data features of the services and encodes the data through a one-dimensional Convolutional Neural Network (1D CNN) in order to achieve intelligent classification and matching of smart grid communication services. The system is also combined with a Bidirectional Long Short-Term Memory (BiLSTM) neural network in order to achieve dynamic prediction of time-series-based traffic in the slicing network. The simulation results validate the feasibility of a service classification model based on a 1D CNN and a traffic prediction model based on BiLSTM for smart grid communication services. Full article
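A minimal PyTorch sketch of a 1D-CNN service classifier of the kind described (the BiLSTM traffic predictor would follow the same pattern with nn.LSTM); layer sizes, the 64-sample feature window, and the 4 service classes are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ServiceClassifier1DCNN(nn.Module):
    """Illustrative 1D CNN over encoded service-traffic feature windows."""
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):            # x: (batch, 1, window)
        return self.head(self.features(x).squeeze(-1))

batch = torch.randn(8, 1, 64)        # 8 encoded service windows (assumed size)
logits = ServiceClassifier1DCNN()(batch)
print(logits.shape)                  # torch.Size([8, 4])
```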
19 pages, 645 KiB  
Article
An Accurate Platform for Investigating TCP Performance in Wi-Fi Networks
by Shunji Aoyagi, Yuki Horie, Do Thi Thu Hien, Thanh Duc Ngo, Duy-Dinh Le, Kien Nguyen and Hiroo Sekiya
Future Internet 2023, 15(7), 246; https://doi.org/10.3390/fi15070246 - 19 Jul 2023
Viewed by 1381
Abstract
An increasing number of devices are connecting to the Internet via Wi-Fi networks, ranging from mobile phones to Internet of Things (IoT) devices. Moreover, Wi-Fi technology has undergone gradual development, with various standards and implementations. In a Wi-Fi network, a Wi-Fi client typically uses the Transmission Control Protocol (TCP) for its applications. Hence, it is essential to understand and quantify the TCP performance in such an environment. This work presents an emulator-based approach for investigating the TCP performance in Wi-Fi networks in a time- and cost-efficient manner. We introduce a new platform, which leverages the Mininet-WiFi emulator to construct various Wi-Fi networks for investigation while considering actual TCP implementations. The platform uniquely includes tools and scripts to assess TCP performance in the Wi-Fi networks quickly. First, to confirm the accuracy of our platform, we compare the emulated results to the results in a real Wi-Fi network, where the bufferbloat problem may occur. The two results are not only similar but also usable for finding the bufferbloat condition under different methods of TCP congestion control. Second, we conduct a similar evaluation in scenarios with the Wi-Fi link as a bottleneck and those with varying signal strengths. Third, we use the platform to compare the fairness performance of TCP congestion control algorithms in a Wi-Fi network with multiple clients. The results show the efficiency and convenience of our platform in recognizing TCP behaviors. Full article
(This article belongs to the Section Internet of Things)
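A rough sketch of the kind of Mininet-WiFi experiment such a platform automates: one station, one access point, and an iperf TCP transfer. This assumes a Linux host with Mininet-WiFi installed and root privileges; the topology parameters are illustrative, not the paper's scenarios.

```python
from mn_wifi.net import Mininet_wifi

net = Mininet_wifi()
sta1 = net.addStation('sta1')
ap1 = net.addAccessPoint('ap1', ssid='tcp-test', mode='g', channel='1')
h1 = net.addHost('h1')               # wired TCP server behind the AP
c0 = net.addController('c0')

net.configureWifiNodes()
net.addLink(ap1, h1)
net.build()
c0.start()
ap1.start([c0])

# Measure TCP throughput between the Wi-Fi client and the wired server.
net.iperf((sta1, h1), seconds=10)
net.stop()
```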
16 pages, 7206 KiB  
Article
An IoT System and MODIS Images Enable Smart Environmental Management for Mekong Delta
by Vu Hien Phan, Danh Phan Hong Pham, Tran Vu Pham, Kashif Naseer Qureshi and Cuong Pham-Quoc
Future Internet 2023, 15(7), 245; https://doi.org/10.3390/fi15070245 - 18 Jul 2023
Cited by 1 | Viewed by 1259
Abstract
The smart environmental management system proposed in this work offers a new approach to environmental monitoring by utilizing data from IoT stations and MODIS satellite imagery. The system is designed to be deployed in vast regions, such as the Mekong Delta, with low building and operating costs, making it a cost-effective solution for environmental monitoring. The system leverages telemetry data collected by IoT stations in combination with MODIS MOD09GA, MOD11A1, and MCD19A2 daily image products to develop computational models that calculate the values of land surface temperature (LST) and particulate matter mass concentrations at 2.5 and 10 µm (PM2.5 and PM10) in areas without IoT stations. The MOD09GA product provides land surface spectral reflectance from visible to shortwave infrared wavelengths to determine land cover types. The MOD11A1 product provides thermal infrared emission from the land surface to compute LST. The MCD19A2 product provides aerosol optical depth values to detect the presence of atmospheric aerosols, e.g., PM2.5 and PM10. The collected data, including remote sensing images and telemetry sensor data, are preprocessed to eliminate redundancy and stored in cloud storage services for further processing. This allows for automatic retrieval and computation of the data by the smart data processing engine, which is designed to process various data types including images and videos from cameras and drones. The calculated values are then made available through a graphic user interface (GUI) that can be accessed through both desktop and mobile devices. The GUI provides real-time visualization of the monitoring values, as well as alerts to administrators based on predetermined rules and values of the data. This allows administrators to easily monitor the system, configure the system by setting alerting rules or calibrating the ground stations, and take appropriate action in response to alerts. Experimental results from the implementation of the system in Dong Thap Province in the Mekong Delta show that the linear regression models for PM2.5 and PM10 estimations from MCD19A2 AOD values have correlation coefficients of 0.81 and 0.68, and RMSEs of 4.11 and 5.74 µg/m3, respectively. Computed LST values from MOD09GA and MOD11A1 reflectance and emission data have a correlation coefficient of 0.82 with ground measurements of air temperature. These errors are comparable to other models reported in similar regions in the literature, demonstrating the effectiveness and accuracy of the proposed system. Full article
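A hedged sketch of the AOD-to-PM regression step reported above, with made-up sample values standing in for collocated MCD19A2 AOD retrievals and IoT-station PM2.5 readings.

```python
# Fit a linear AOD -> PM2.5 model and report the fit statistics the paper
# uses (correlation coefficient and RMSE). Sample values are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

aod = np.array([[0.12], [0.25], [0.40], [0.55], [0.70], [0.90]])
pm25 = np.array([8.0, 15.5, 24.0, 33.0, 41.5, 55.0])   # µg/m3

model = LinearRegression().fit(aod, pm25)
pred = model.predict(aod)
r = np.corrcoef(pred, pm25)[0, 1]
rmse = np.sqrt(np.mean((pred - pm25) ** 2))
print(f"slope={model.coef_[0]:.1f}, r={r:.2f}, RMSE={rmse:.2f} µg/m3")
```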
42 pages, 4766 KiB  
Review
Self-Healing in Cyber–Physical Systems Using Machine Learning: A Critical Analysis of Theories and Tools
by Obinna Johnphill, Ali Safaa Sadiq, Feras Al-Obeidat, Haider Al-Khateeb, Mohammed Adam Taheir, Omprakash Kaiwartya and Mohammed Ali
Future Internet 2023, 15(7), 244; https://doi.org/10.3390/fi15070244 - 17 Jul 2023
Cited by 4 | Viewed by 2706
Abstract
The rapid advancement of networking, computing, sensing, and control systems has introduced a wide range of cyber threats, including those from new devices deployed during the development of scenarios. With recent advancements in automobiles, medical devices, smart industrial systems, and other technologies, system failures resulting from external attacks or internal process malfunctions are increasingly common. Restoring the system’s stable state requires autonomous intervention through the self-healing process to maintain service quality. This paper, therefore, aims to analyse the state of the art and identify where self-healing using machine learning can be applied to cyber–physical systems to enhance security and prevent failures within the system. The paper describes three key components of self-healing functionality in computer systems: anomaly detection, fault alert, and fault auto-remediation. The significance of these components is that self-healing functionality cannot be practical without considering all three. Understanding the self-healing theories that form the guiding principles for implementing these functionalities with real-life implications is crucial. There are strong indications that self-healing functionality in the cyber–physical system is an emerging area of research that holds great promise for the future of computing technology. It has the potential to provide seamless self-organising and self-restoration functionality to cyber–physical systems, leading to increased security of systems and improved user experience. For instance, a functional self-healing system implemented on a power grid will react autonomously when a threat or fault occurs, without requiring human intervention to restore power to communities and preserve critical services after power outages or defects. This paper presents the existing vulnerabilities, threats, and challenges and critically analyses the current self-healing theories and methods that use machine learning for cyber–physical systems. Full article
(This article belongs to the Section Smart System Infrastructure and Applications)
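A toy sketch of the three self-healing components named above, anomaly detection, fault alert, and auto-remediation, using an Isolation Forest on simulated telemetry; the remediation action is a placeholder, not a method from the paper.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(50.0, 2.0, size=(500, 1))       # healthy telemetry
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

def restart_component() -> None:
    print("remediation: restarting the affected component")

def self_heal(reading: float) -> None:
    if detector.predict([[reading]])[0] == -1:        # 1. anomaly detection
        print(f"ALERT: anomalous reading {reading}")  # 2. fault alert
        restart_component()                           # 3. auto-remediation

self_heal(50.3)   # healthy -> no action
self_heal(95.0)   # faulty  -> alert + remediation
```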
17 pages, 1160 KiB  
Article
Machine Learning for Network Intrusion Detection—A Comparative Study
by Mustafa Al Lail, Alejandro Garcia and Saul Olivo
Future Internet 2023, 15(7), 243; https://doi.org/10.3390/fi15070243 - 16 Jul 2023
Cited by 5 | Viewed by 2533
Abstract
Modern society has quickly evolved to utilize communication and data-sharing media with the advent of the internet and electronic technologies. However, these technologies have created new opportunities for attackers to gain access to confidential electronic resources. As a result, data breaches have significantly impacted our society in multiple ways. To mitigate this situation, researchers have developed multiple security countermeasure techniques known as Network Intrusion Detection Systems (NIDS). Despite these techniques, attackers have developed new strategies to gain unauthorized access to resources. In this work, we propose using machine learning (ML) to develop a NIDS system capable of detecting modern attack types with a very high detection rate. To this end, we implement and evaluate several ML algorithms and compare their effectiveness using a state-of-the-art dataset containing modern attack types. The results show that the random forest model outperforms other models, with a detection rate of modern network attacks of 97 percent. This study shows that not only is accurate prediction possible but also a high detection rate of attacks can be achieved. These results indicate that ML has the potential to create very effective NIDS systems. Full article
(This article belongs to the Special Issue Anomaly Detection in Modern Networks)
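A hedged sketch of the comparative setup described above: several scikit-learn classifiers evaluated on the same labelled flow features, with detection rate measured as recall on the attack class. The random data stands in for a real NIDS dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

X = np.random.rand(1000, 20)                     # flow features (synthetic)
y = np.random.randint(0, 2, 1000)                # 0 = benign, 1 = attack
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for model in (RandomForestClassifier(n_estimators=100, random_state=0),
              LogisticRegression(max_iter=1000)):
    model.fit(X_tr, y_tr)
    det = recall_score(y_te, model.predict(X_te))   # detection rate = recall
    print(type(model).__name__, f"detection rate: {det:.2f}")
```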
13 pages, 2697 KiB  
Article
Optimization of the Decision Criterion for Increasing the Bandwidth Utilization by Means of the Novel Effective DBA Algorithm in NG-PON2 Networks
by Rastislav Róka
Future Internet 2023, 15(7), 242; https://doi.org/10.3390/fi15070242 - 15 Jul 2023
Viewed by 898
Abstract
In this paper, the reasons for the bandwidth and wavelength utilization in future next-generation passive optical networks are presented, and the possibilities for realization and utilization of extended dynamic wavelength and bandwidth algorithms for the second next-generation passive optical networks (NG-PON2) are analyzed. Next, principles of the effective dynamic bandwidth allocation are introduced in detail, focused on the importance of the decision criterion optimization. To achieve a better bandwidth utilization of dedicated wavelengths in NG-PON2 networks, this paper is focused on the novel effective dynamic bandwidth allocation algorithm with adaptive allocation of wavelengths to optical network units as well as the optimization of the decision criterion. The algorithm and the proposed method are tested and evaluated through simulation with actual traffic data. For analyzing novel extended dynamic wavelength and bandwidth algorithms used for various cases of wavelength allocation in NG-PON2 networks, the effective dynamic bandwidth allocation algorithm analysis is realized in the enhancement of simulation program. Finally, an optimization of the decision criterion defining a minimum bandwidth utilization of the actual wavelength is executed for NG-PON2 networks based on the hybrid time and wavelength division multiplexing technique. Full article
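An illustrative sketch of the decision criterion discussed above: a wavelength stays in service only while its bandwidth utilization exceeds a minimum threshold, otherwise its ONUs are candidates for reallocation. The threshold and traffic values are assumptions, not the optimized values from the paper.

```python
MIN_UTILIZATION = 0.30          # decision criterion threshold (assumed value)

wavelengths = {                 # wavelength -> (allocated Gb/s, capacity Gb/s)
    "lambda1": (8.2, 10.0),
    "lambda2": (2.1, 10.0),
    "lambda3": (6.7, 10.0),
}

for wl, (allocated, capacity) in wavelengths.items():
    utilization = allocated / capacity
    if utilization < MIN_UTILIZATION:
        print(f"{wl}: {utilization:.0%} < threshold -> reallocate its ONUs")
    else:
        print(f"{wl}: {utilization:.0%} -> keep serving")
```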
17 pages, 2320 KiB  
Article
Analysis of ICS and SCADA Systems Attacks Using Honeypots
by Mohamed Mesbah, Mahmoud Said Elsayed, Anca Delia Jurcut and Marianne Azer
Future Internet 2023, 15(7), 241; https://doi.org/10.3390/fi15070241 - 14 Jul 2023
Cited by 1 | Viewed by 3334
Abstract
Supervisory control and data acquisition (SCADA) attacks have increased due to the digital transformation of many industrial control systems (ICS). Operational technology (OT) operators should use the defense-in-depth concept to secure their operations from cyber attacks and reduce the attack surface. Layers of security, such as firewalls, endpoint solutions, honeypots, etc., should be used to secure traditional IT systems. The three main goals of IT cybersecurity are confidentiality, integrity, and availability (CIA), but these three goals have different levels of importance in the operational technology (OT) industry. Availability comes before confidentiality and integrity because of the criticality of business in OT. One of the layers of security in both IT and OT is honeypots. SCADA honeypots are used as a layer of security to mitigate attacks, learn attackers’ techniques, and identify network and system weaknesses that attackers may exploit, so that these vulnerabilities can be mitigated. In this paper, we use SCADA honeypots for early detection of potential malicious tampering within a SCADA device network, and to determine threats against ICS/SCADA networks. An analysis of SCADA honeypots gives us the ability to know which protocols are most commonly attacked, and attackers’ behaviors, locations, and goals. We use an ICS/SCADA honeypot called Conpot, which simulates real ICS/SCADA systems with some ICS protocols and ICS/SCADA PLCs. Full article
(This article belongs to the Section Cybersecurity)
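A simplified sketch of the post-processing step described above: counting which ICS protocols and source IPs appear most often in honeypot logs. The log format here is a stand-in for illustration, not Conpot's actual output format.

```python
from collections import Counter

log_lines = [
    "2023-05-01 10:02:11 modbus connection from 198.51.100.7",
    "2023-05-01 10:05:42 s7comm connection from 203.0.113.9",
    "2023-05-01 11:17:03 modbus connection from 198.51.100.7",
]

protocols, sources = Counter(), Counter()
for line in log_lines:
    _, _, proto, _, _, ip = line.split()   # date time proto "connection" "from" ip
    protocols[proto] += 1
    sources[ip] += 1

print("most attacked protocols:", protocols.most_common())
print("most active sources:", sources.most_common())
```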
20 pages, 9643 KiB  
Article
Acoustic TDOA Measurement and Accurate Indoor Positioning for Smartphone
by Bingbing Cheng and Jiao Wu
Future Internet 2023, 15(7), 240; https://doi.org/10.3390/fi15070240 - 13 Jul 2023
Viewed by 1385
Abstract
The global satellite navigation signal works well in open outdoor areas. However, because the signal is weak indoors, continuous and reliable indoor positioning is challenging. In this paper, we developed a hybrid system that combines radio signals and acoustic signals to achieve decimeter-level positioning indoors. Specifically, acoustic transmitters are synchronized with different codes. At the same time, our decoding scheme only requires a simple cross-correlation operation without time-frequency analysis. Secondly, acoustic signals are reflected by glass, walls, and other obstacles in the indoor environment, which seriously affects time difference of arrival (TDOA) measurement accuracy. We developed a robust first path detection algorithm to obtain reliable TDOA measurement values. Finally, we combine the maximum likelihood (ML) algorithm with the proposed TDOA measurement method to obtain the location of the smartphone. We carried out static positioning experiments for smartphones in two scenes. The experimental results show that the average positioning error of the system is less than 0.5 m. Our system has the following advantages: (1) smartphone access; (2) an unlimited number of users; (3) easily deployed acoustic nodes; and (4) decimeter-level positioning accuracy. Full article
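A minimal numpy sketch of the cross-correlation step mentioned above: estimating the delay of a known transmitted code inside a received signal, from which TDOAs between anchor pairs are formed. The signal, code, and delay are synthetic.

```python
import numpy as np

fs = 48_000                                   # sample rate (assumed)
rng = np.random.default_rng(1)
code = rng.choice([-1.0, 1.0], size=512)      # transmitter's spreading code
true_delay = 700                              # samples

received = np.zeros(4096)
received[true_delay:true_delay + code.size] += code
received += 0.5 * rng.normal(size=received.size)    # indoor noise

corr = np.correlate(received, code, mode="valid")   # simple matched filter
est = int(np.argmax(np.abs(corr)))
print(f"estimated delay: {est} samples ({est / fs * 1000:.2f} ms)")
```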
27 pages, 1266 KiB  
Article
Beyond the Semantic Web: Towards an Implicit Pragmatic Web and a Web of Social Representations
by Yannis Haralambous and Philippe Lenca
Future Internet 2023, 15(7), 239; https://doi.org/10.3390/fi15070239 - 13 Jul 2023
Cited by 1 | Viewed by 1784
Abstract
Motivated by the distinction between semantics and pragmatics as sub-disciplines of linguistics, shortly after Tim Berners-Lee introduced the Semantic Web in 2001, there have been works on its extension to the “pragmatic level”. Twenty years later, the Semantic Web is more popular than ever, while little has been achieved in extending it into a Pragmatic Web. Social representations introduced by Serge Moscovici in the 1960s seem totally ignored by the information technology community even though they are strongly related to research on opinion mining and representation in social media. We, thus, recall the major results of academic research on the Pragmatic Web, followed by our proposal for an Implicit Pragmatic Web inspired by various sub-domains of the discipline of pragmatics. We further recall the basics of the social representations theory and discuss their potential implementations in a Web of Social Representations and thus their potential contribution towards at least a part of the future internet. Full article
16 pages, 1015 KiB  
Article
Assisting Drivers at Stop Signs in a Connected Vehicle Environment
by Maram Bani Younes
Future Internet 2023, 15(7), 238; https://doi.org/10.3390/fi15070238 - 08 Jul 2023
Viewed by 983
Abstract
Road intersections are shared among several conflicting traffic flows. Stop signs are used to safely control competing traffic flows at road intersections, and driving rules are constructed to control the competing flows at these stop sign intersections. Vehicles must come to a complete stop in front of stop signs. First to arrive, first to go; straight before turns; and right before left are the main driving rules at stop sign intersections. Drivers must be aware of the stop sign’s existence, the architecture of the road intersection, and the traffic distribution in the competing flows in order to decide whether to pass the intersection or wait for conflicting flows according to the current situation. Due to bad weather conditions, obstacles, or heavy vehicles, drivers may fail to notice the stop sign. Moreover, the architecture of the road intersection and the characteristics of the competing traffic flows are not always clear to drivers. In this work, we aim to make the driver aware ahead of time of existing stop signs, the architecture of the road intersection, and the traffic characteristics of the competing flows at the targeted destination. Moreover, the best speed and driving behavior are recommended to each driver based on his/her position and the distribution of the existing traffic. A driving assistance protocol based on vehicular network technology is presented in this paper. Real-time traffic characteristics of vehicles around the intersections are gathered and analyzed, and the best action for each vehicle is recommended accordingly. The experimental results show that the proposed driving assistance protocol successfully enhances safety around road intersections controlled by stop signs by reducing the percentage of accident occurrences: accidents decrease by 25% when the proposed protocol is used, and the traffic efficiency of these intersections is also enhanced. Full article
28 pages, 22171 KiB  
Article
A Cyber-Physical System for Wildfire Detection and Firefighting
by Pietro Battistoni, Andrea Antonio Cantone, Gerardo Martino, Valerio Passamano, Marco Romano, Monica Sebillo and Giuliana Vitiello
Future Internet 2023, 15(7), 237; https://doi.org/10.3390/fi15070237 - 06 Jul 2023
Cited by 7 | Viewed by 2521
Abstract
The increasing frequency and severity of forest fires necessitate early detection and rapid response to mitigate their impact. This project aims to design a cyber-physical system (CPS) for early detection and rapid response to forest fires using advanced technologies. The system incorporates Internet of Things (IoT) sensors and autonomous unmanned aerial and ground vehicles (UAVs and UGVs) controlled by the Robot Operating System (ROS). An IoT-based wildfire detection node continuously monitors environmental conditions, enabling early fire detection. Upon fire detection, a UAV autonomously surveys the area to precisely locate the fire and can deploy an extinguishing payload or provide data for decision-making. The UAV communicates the fire’s precise location to a collaborative UGV, which autonomously reaches the designated area to support ground-based firefighters. The CPS includes a ground control station with web-based dashboards for real-time monitoring of system parameters and telemetry data from UAVs and UGVs. The article demonstrates the real-time fire detection capabilities of the proposed system using simulated forest fire scenarios. The objective is to provide a practical approach using open-source technologies for early detection and extinguishing of forest fires, with potential applications in various industries, surveillance, and precision agriculture. Full article
18 pages, 11580 KiB  
Article
Using a Graph Engine to Visualize the Reconnaissance Tactic of the MITRE ATT&CK Framework from UWF-ZeekData22
by Sikha S. Bagui, Dustin Mink, Subhash C. Bagui, Michael Plain, Jadarius Hill and Marshall Elam
Future Internet 2023, 15(7), 236; https://doi.org/10.3390/fi15070236 - 06 Jul 2023
Viewed by 1744
Abstract
There has been a great deal of research in the area of using graph engines and graph databases to model network traffic and network attacks, but the novelty of this research lies in visually or graphically representing the Reconnaissance Tactic (TA0043) of the MITRE ATT&CK framework. Using the newly created dataset, UWF-ZeekData22, based on the MITRE ATT&CK framework, patterns involving network connectivity, connection duration, and data volume were found and loaded into a graph environment. Patterns were also found in the graphed data that matched the Reconnaissance as well as other tactics captured by UWF-ZeekData22. The star motif was particularly useful in mapping the Reconnaissance Tactic. The results of this paper show that graph databases/graph engines can be essential tools for understanding network traffic and trying to detect network intrusions before they happen. Finally, an analysis of the runtime performance of the reduced dataset used to create the graph databases showed that the reduced datasets performed better than the full dataset. Full article
(This article belongs to the Special Issue Graph Machine Learning and Complex Networks)
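A hedged sketch of the star-motif idea: in a graph of network connections, a host that fans out to many distinct destinations looks like a scanning hub. This is built with networkx on toy edges rather than the paper's graph engine or UWF-ZeekData22; the threshold is an assumption.

```python
import networkx as nx

G = nx.DiGraph()
G.add_edges_from(
    [("10.0.0.5", f"192.168.1.{i}") for i in range(1, 30)]   # scanner: star hub
    + [("192.168.1.2", "192.168.1.3"), ("192.168.1.4", "192.168.1.2")]
)

FANOUT_THRESHOLD = 20            # assumed cut-off for flagging a hub
for node in G.nodes:
    if G.out_degree(node) >= FANOUT_THRESHOLD:
        print(f"{node}: out-degree {G.out_degree(node)} -> reconnaissance-like star motif")
```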
21 pages, 1044 KiB  
Article
Enhancing Collaborative Filtering-Based Recommender System Using Sentiment Analysis
by Ikram Karabila, Nossayba Darraz, Anas El-Ansari, Nabil Alami and Mostafa El Mallahi
Future Internet 2023, 15(7), 235; https://doi.org/10.3390/fi15070235 - 05 Jul 2023
Cited by 5 | Viewed by 3079
Abstract
Recommendation systems (RSs) are widely used in e-commerce to improve conversion rates by aligning product offerings with customer preferences and interests. While traditional RSs rely solely on numerical ratings to generate recommendations, these ratings alone may not be sufficient to offer personalized and accurate suggestions. To overcome this limitation, additional sources of information, such as reviews, can be utilized. However, analyzing and understanding the information contained within reviews, which are often unstructured data, is a challenging task. To address this issue, sentiment analysis (SA) has attracted considerable attention as a tool to better comprehend a user’s opinions, emotions, and attitudes. In this study, we propose a novel RS that leverages ensemble learning by integrating sentiment analysis of textual data with collaborative filtering techniques to provide users with more precise and individualized recommendations. Our system was developed in three main steps. Firstly, we used unsupervised “GloVe” vectorization for better classification performance and built a sentiment model based on Bidirectional Long Short-Term Memory (Bi-LSTM). Secondly, we developed a recommendation model based on collaborative filtering techniques. Lastly, we integrated our sentiment analysis model into the RS. Our proposed model of SA achieved an accuracy score of 93%, which is superior to other models. The results of our study indicate that our approach enhances the accuracy of the recommendation system. Overall, our proposed system offers customers a more reliable and personalized recommendation service in e-commerce. Full article
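A toy sketch of the integration step: an item-based collaborative-filtering prediction blended with a review-sentiment score (a fixed number here standing in for the Bi-LSTM's output). The rating matrix and the 0.7/0.3 blending weights are assumptions for illustration.

```python
import numpy as np

ratings = np.array([            # users x items, 0 = unrated (synthetic)
    [5, 3, 4, 1],
    [4, 0, 0, 1],
    [1, 1, 5, 5],
])

def predict_cf(user: int, item: int) -> float:
    """Item-based CF: average the user's ratings weighted by item similarity."""
    rated = np.flatnonzero(ratings[user])
    sims = [np.dot(ratings[:, item], ratings[:, j]) /
            (np.linalg.norm(ratings[:, item]) * np.linalg.norm(ratings[:, j]) + 1e-9)
            for j in rated]
    return float(np.average(ratings[user, rated], weights=sims))

sentiment = 0.9                                  # sentiment model output in [0, 1]
blended = 0.7 * predict_cf(user=1, item=2) + 0.3 * (sentiment * 5)
print(f"blended predicted rating: {blended:.2f}")
```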
18 pages, 683 KiB  
Article
Intelligent Video Streaming at Network Edge: An Attention-Based Multiagent Reinforcement Learning Solution
by Xiangdong Tang, Fei Chen and Yunlong He
Future Internet 2023, 15(7), 234; https://doi.org/10.3390/fi15070234 - 03 Jul 2023
Cited by 1 | Viewed by 1310
Abstract
Video viewing is currently the primary form of entertainment for modern people due to the rapid development of mobile devices and 5G networks. The combination of pervasive edge devices and adaptive bitrate streaming technologies can lessen the effects of network changes, boosting user quality of experience (QoE). Even while edge servers can offer near-end services to local users, it is challenging to accommodate a high number of mobile users in a dynamic environment due to their restricted capacity to maximize user long-term QoE. We are motivated to integrate user allocation and bitrate adaptation into one optimization objective and propose a multiagent reinforcement learning method combined with an attention mechanism to solve the problem of multiple edge servers cooperatively serving users. Through comparative experiments, we demonstrate the superiority of our proposed solution in various network configurations. To tackle the edge user allocation problem, we propose a method called attention-based multiagent reinforcement learning (AMARL), which optimizes the problem in two directions, i.e., maximizing the QoE of users and minimizing the number of leased edge servers. The performance of AMARL is demonstrated by experiments. Full article
(This article belongs to the Special Issue Edge and Fog Computing for the Internet of Things)
3 pages, 161 KiB  
Editorial
Developments of Computer Vision and Image Processing: Methodologies and Applications
by Manuel J. C. S. Reis
Future Internet 2023, 15(7), 233; https://doi.org/10.3390/fi15070233 - 30 Jun 2023
Viewed by 921
Abstract
The rapid advancement of technology has enabled a vast and ever-growing number of computer applications in real scenarios of our daily life [...] Full article
20 pages, 486 KiB  
Article
Investigation on Self-Admitted Technical Debt in Open-Source Blockchain Projects
by Andrea Pinna, Maria Ilaria Lunesu, Stefano Orrù and Roberto Tonelli
Future Internet 2023, 15(7), 232; https://doi.org/10.3390/fi15070232 - 30 Jun 2023
Cited by 1 | Viewed by 1216
Abstract
Technical debt refers to decisions made during the design and development of software that postpone the resolution of technical problems or the enhancement of the software’s features to a later date. If not properly managed, technical debt can put long-term software quality and maintainability at risk. Self-admitted technical debt is defined as the addition of specific comments to source code as a result of conscious and deliberate decisions to accumulate technical debt. In this paper, we will look at the presence of self-admitted technical debt in open-source blockchain projects, which are characterized by the use of a relatively novel technology and the need to generate trust. The self-admitted technical debt was analyzed using NLP techniques for the classification of comments extracted from the source code of ten projects chosen based on capitalization and popularity. The analysis of self-admitted technical debt in blockchain projects was compared with the results of previous non-blockchain open-source project analyses. The findings show that self-admitted design technical debt outnumbers requirement technical debt in blockchain projects. The analysis discovered that some projects had a low percentage of self-admitted technical debt in the comments but a high percentage of source code files with debt. In addition, self-admitted technical debt is on average more prevalent in blockchain projects and more equally distributed than in reference Java projects. If not managed, the relatively high presence of detected technical debt in blockchain projects could represent a threat to the needed trust between the blockchain system and the users. Blockchain project development teams could benefit from self-admitted technical debt detection for targeted technical debt management. Full article
(This article belongs to the Special Issue State-of-the-Art Future Internet Technology in Italy 2022–2023)
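A simplified sketch of self-admitted technical debt detection: the paper uses NLP classification, but even a keyword pass over source comments conveys the idea. The patterns and the design/requirement split below are illustrative, not the paper's classifier.

```python
import re

DESIGN_DEBT = re.compile(r"\b(hack|workaround|ugly|refactor)\b", re.I)
REQUIREMENT_DEBT = re.compile(r"\b(todo|fixme|not (yet )?implemented)\b", re.I)

comments = [
    "// TODO: handle reorg of the chain here",
    "# ugly workaround for the signature cache",
    "/* refactor this consensus check later */",
    "// computes the Merkle root",
]

for c in comments:
    if DESIGN_DEBT.search(c):
        print("design SATD:     ", c)
    elif REQUIREMENT_DEBT.search(c):
        print("requirement SATD:", c)
```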
27 pages, 5489 KiB  
Article
A New AI-Based Semantic Cyber Intelligence Agent
by Fahim Sufi
Future Internet 2023, 15(7), 231; https://doi.org/10.3390/fi15070231 - 29 Jun 2023
Cited by 8 | Viewed by 2021
Abstract
The surge in cybercrime has emerged as a pressing concern in contemporary society due to its far-reaching financial, social, and psychological repercussions on individuals. Beyond inflicting monetary losses, cyber-attacks exert adverse effects on the social fabric and psychological well-being of the affected individuals. In order to mitigate the deleterious consequences of cyber threats, adoption of an intelligent agent-based solution to enhance the speed and comprehensiveness of cyber intelligence is advocated. In this paper, a novel cyber intelligence solution is proposed, employing four semantic agents that interact autonomously to acquire crucial cyber intelligence pertaining to any given country. The solution leverages a combination of techniques, including a convolutional neural network (CNN), sentiment analysis, exponential smoothing, latent Dirichlet allocation (LDA), term frequency-inverse document frequency (TF-IDF), Porter stemming, and others, to analyse data from both social media and web sources. The proposed method underwent evaluation from 13 October 2022 to 6 April 2023, utilizing a dataset comprising 37,386 tweets generated by 30,706 users across 54 languages. To address non-English content, a total of 8199 HTTP requests were made to facilitate translation. Additionally, the system processed 238,220 cyber threat data from the web. Within a remarkably brief duration of 6 s, the system autonomously generated a comprehensive cyber intelligence report encompassing 7 critical dimensions of cyber intelligence for countries such as Russia, Ukraine, China, Iran, India, and Australia. Full article
(This article belongs to the Special Issue Semantic Web Services for Multi-Agent Systems)
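A hedged sketch of two of the listed text-analysis steps, TF-IDF vectorization and LDA topic extraction, applied to a few stand-in cyber-threat posts with scikit-learn; the posts and topic count are illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [
    "ransomware attack hits hospital network",
    "phishing campaign targets bank customers",
    "new ransomware variant spreads via phishing email",
    "ddos attack takes government portal offline",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(posts)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = tfidf.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-3:][::-1]]   # top terms per topic
    print(f"topic {i}: {top}")
```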
18 pages, 587 KiB  
Article
Synonyms, Antonyms and Factual Knowledge in BERT Heads
by Lorenzo Serina, Luca Putelli, Alfonso Emilio Gerevini and Ivan Serina
Future Internet 2023, 15(7), 230; https://doi.org/10.3390/fi15070230 - 29 Jun 2023
Cited by 2 | Viewed by 1373
Abstract
In recent years, many studies have been devoted to discovering the inner workings of Transformer-based models, such as BERT, for instance, attempting to identify what information is contained within them. However, little is known about how these models store this information in their millions of parameters and which parts of the architecture are the most important. In this work, we propose an approach to identify self-attention mechanisms, called heads, that contain semantic and real-world factual knowledge in BERT. Our approach includes a metric computed from attention weights and exploits a standard clustering algorithm for extracting the most relevant connections between tokens in a head. In our experimental analysis, we focus on how heads can connect synonyms, antonyms and several types of factual knowledge regarding subjects such as geography and medicine. Full article
(This article belongs to the Section Big Data and Augmented Intelligence)
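A minimal sketch of inspecting a single BERT head's attention between a word pair, in the spirit of the analysis above; it requires the transformers library and downloads bert-base-uncased on first run. The layer/head choice is arbitrary, not one identified by the paper.

```python
import torch
from transformers import BertTokenizer, BertModel

tok = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tok("large is a synonym of big", return_tensors="pt")
with torch.no_grad():
    attentions = model(**inputs).attentions   # 12 layers x (1, 12, seq, seq)

tokens = tok.convert_ids_to_tokens(inputs["input_ids"][0])
i, j = tokens.index("large"), tokens.index("big")
layer, head = 8, 3                            # arbitrary head to inspect
print(f"attention large->big in layer {layer}, head {head}:",
      float(attentions[layer][0, head, i, j]))
```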
31 pages, 8728 KiB  
Article
Hybridizing Fuzzy String Matching and Machine Learning for Improved Ontology Alignment
by Mohammed Suleiman Mohammed Rudwan and Jean Vincent Fonou-Dombeu
Future Internet 2023, 15(7), 229; https://doi.org/10.3390/fi15070229 - 28 Jun 2023
Cited by 2 | Viewed by 1449
Abstract
Ontology alignment has become an important process for identifying similarities and differences between ontologies, to facilitate their integration and reuse. To this end, fuzzy string-matching algorithms have been developed for string similarity detection and have been used in ontology alignment. However, a significant limitation of existing fuzzy string-matching algorithms is their reliance on the lexical/syntactic contents of ontologies only, which do not capture semantic features of ontologies. To address this limitation, this paper proposed a novel method that hybridizes fuzzy string-matching algorithms and the Deep Bidirectional Transformer (BERT) deep learning model with three machine learning regression classifiers, namely, K-Nearest Neighbor Regression (kNN), Decision Tree Regression (DTR), and Support Vector Regression (SVR), to perform the alignment of ontologies. The use of the kNN, SVR, and DTR classifiers in the proposed method resulted in the building of three similarity models (SM), encoded SM-kNN, SM-SVR, and SM-DTR, respectively. The experiments were conducted on a dataset obtained from the anatomy track in the Ontology Alignment and Evaluation Initiative 2022 (OAEI 2022). The performances of the SM-kNN, SM-SVR, and SM-DTR models were evaluated using various metrics including precision, recall, F1-score, and accuracy at thresholds 0.70, 0.80, and 0.90, as well as error rates and running times. The experimental results revealed that the SM-SVR model achieved the best recall of 1.0, while the SM-DTR model exhibited the best precision, accuracy, and F1-score of 0.98, 0.97, and 0.98, respectively. Furthermore, the results showed that the SM-kNN, SM-SVR, and SM-DTR models outperformed state-of-the-art alignment systems that participated in the OAEI 2022 challenge, indicating the superior capability of the proposed method. Full article
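A toy sketch of the hybrid idea: lexical similarity features (difflib's ratio here, standing in for a full fuzzy-matching suite and the BERT embeddings) feed a regressor that predicts an alignment score, as in the SM-kNN model. The training pairs and gold scores are fabricated for shape only.

```python
from difflib import SequenceMatcher
from sklearn.neighbors import KNeighborsRegressor

def lexical_features(a: str, b: str) -> list:
    return [SequenceMatcher(None, a.lower(), b.lower()).ratio(),
            abs(len(a) - len(b)) / max(len(a), len(b))]

pairs = [("Heart", "heart"), ("Vertebra", "vertebrae"),
         ("Femur", "Skull"), ("Rib", "Ribcage")]
labels = [1.0, 0.95, 0.0, 0.7]        # assumed gold alignment scores

X = [lexical_features(a, b) for a, b in pairs]
sm_knn = KNeighborsRegressor(n_neighbors=2).fit(X, labels)   # cf. SM-kNN
print(sm_knn.predict([lexical_features("Vertebral column", "vertebra")]))
```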
26 pages, 496 KiB  
Article
KubeHound: Detecting Microservices’ Security Smells in Kubernetes Deployments
by Giorgio Dell’Immagine, Jacopo Soldani and Antonio Brogi
Future Internet 2023, 15(7), 228; https://doi.org/10.3390/fi15070228 - 26 Jun 2023
Cited by 3 | Viewed by 1892
Abstract
As microservice-based architectures are increasingly adopted, microservices security has become a crucial aspect to consider for IT businesses. Starting from a set of “security smells” for microservice applications that were recently proposed in the literature, we enable the automatic detection of such smells in microservice applications deployed with Kubernetes. We first introduce possible analysis techniques to automatically detect security smells in Kubernetes-deployed microservices. We then demonstrate the practical applicability of the proposed techniques by introducing KubeHound, an extensible prototype tool for automatically detecting security smells in microservice applications, and which already features a selected subset of the discussed analyses. We finally show that KubeHound can effectively detect instances of security smells in microservice applications by means of controlled experiments and by applying it to existing, third-party applications. Full article
(This article belongs to the Special Issue Information and Future Internet Security, Trust and Privacy II)
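A hedged sketch of one static check of the kind such a tool can perform: flagging privileged containers in a Kubernetes manifest. A real analysis covers many more smells; the manifest is inline for self-containment and this is not KubeHound's implementation (requires pyyaml).

```python
import yaml

manifest = """
apiVersion: v1
kind: Pod
metadata: {name: demo}
spec:
  containers:
  - name: app
    image: nginx
    securityContext: {privileged: true}
"""

pod = yaml.safe_load(manifest)
for c in pod["spec"]["containers"]:
    if c.get("securityContext", {}).get("privileged"):
        print(f"smell: container '{c['name']}' runs privileged")
```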
29 pages, 1264 KiB  
Article
Artificial Intelligence in Virtual Telemedicine Triage: A Respiratory Infection Diagnosis Tool with Electronic Measuring Device
by Naythan Villafuerte, Santiago Manzano, Paulina Ayala and Marcelo V. García
Future Internet 2023, 15(7), 227; https://doi.org/10.3390/fi15070227 - 25 Jun 2023
Cited by 2 | Viewed by 2098
Abstract
Due to the similarities in symptomatology between COVID-19 and other respiratory infections, diagnosis of these diseases can be complicated. To address this issue, a web application was developed that employs a chatbot and artificial intelligence to detect COVID-19, the common cold, and allergic rhinitis. The application also integrates an electronic device that connects to the app and measures vital signs such as heart rate, blood oxygen saturation, and body temperature using two ESP8266 microcontrollers. The measured data are displayed on an OLED screen and sent to a Google Cloud server using the MQTT protocol. The AI algorithm accurately determines the respiratory disease that the patient is suffering from, achieving an accuracy rate of 0.91 after the symptomatology is entered. The app includes a user interface that allows patients to view their medical history of consultations with the assistant. The app was developed using HTML, CSS, JavaScript, MySQL, and Bootstrap 5 tools, resulting in a responsive, dynamic, and robust application that is secure for both the user and the server. Overall, this app provides an efficient and reliable way to diagnose respiratory infections using the power of artificial intelligence. Full article
(This article belongs to the Special Issue Telemedicine Applications in the Internet of Things)
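A minimal sketch of the telemetry path described above: publishing vital signs over MQTT with paho-mqtt (1.x-style client API). The broker address, topic, and payload schema are assumptions; the ESP8266 firmware side would publish to the same topic.

```python
import json
import paho.mqtt.client as mqtt

client = mqtt.Client()                        # paho-mqtt 1.x client API
client.connect("broker.example.com", 1883)    # placeholder broker address

reading = {"heart_rate": 72, "spo2": 98, "temperature_c": 36.7}
client.publish("triage/patient42/vitals", json.dumps(reading), qos=1)
client.disconnect()
```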
18 pages, 7232 KiB  
Article
Exploiting Misconfiguration Vulnerabilities in Microsoft’s Azure Active Directory for Privilege Escalation Attacks
by Ibrahim Bu Haimed, Marwan Albahar and Ali Alzubaidi
Future Internet 2023, 15(7), 226; https://doi.org/10.3390/fi15070226 - 23 Jun 2023
Cited by 2 | Viewed by 2700
Abstract
Cloud services provided by Microsoft are growing rapidly in number and importance. Azure Active Directory (AAD) is becoming more important due to its role in facilitating identity management for cloud-based services. However, several risks and security issues have been associated with cloud systems due to vulnerabilities associated with identity management systems. In particular, misconfigurations could severely impact the security of cloud-based systems. Accordingly, this study identifies and experimentally evaluates exploitable misconfiguration vulnerabilities in Azure AD which can eventually lead to the risk of privilege escalation attacks. The study focuses on two scenarios: dynamic group settings and the activation of the Managed Identity feature on virtual devices. Through experimental evaluation, the research demonstrates the successful execution of these attacks, resulting in unauthorized access to sensitive information. Finally, we suggest several approaches to prevent such attacks by isolating sensitive systems to minimize the possibility of damage resulting from a misconfiguration accident and highlight the need for further studies. Full article
(This article belongs to the Special Issue Privacy and Cybersecurity in the Artificial Intelligence Age)
34 pages, 5207 KiB  
Article
An Ontology for Spatio-Temporal Media Management and an Interactive Application
by Takuro Sone, Shin Kato, Ray Atarashi, Jin Nakazato, Manabu Tsukada and Hiroshi Esaki
Future Internet 2023, 15(7), 225; https://doi.org/10.3390/fi15070225 - 23 Jun 2023
Cited by 1 | Viewed by 1330
Abstract
In addition to traditional viewing media, metadata that record the physical space from multiple perspectives will become extremely important in realizing interactive applications such as Virtual Reality (VR) and Augmented Reality (AR). This paper proposes the Software Defined Media (SDM) Ontology designed to describe spatio-temporal media and the systems that handle them comprehensively. Spatio-temporal media refers to video, audio, and various sensor values recorded together with time and location information. The SDM Ontology can flexibly and precisely represent spatio-temporal media, equipment, and functions that record, process, edit, and play them, as well as related semantic information. In addition, we recorded classical and jazz concerts using many video cameras and audio microphones, and then processed and edited the video and audio data with related metadata. Then, we created a dataset using the SDM Ontology and published it as linked open data (LOD). Furthermore, we developed “Web360²”, an application that enables users to interactively view and experience 360° video and spatial acoustic sounds by referring to this dataset. We conducted a subjective evaluation by using a user questionnaire. Web360² is a data-driven web application that obtains video and audio data and related metadata by querying the dataset. Full article
(This article belongs to the Special Issue Semantic and Social Internet of Things)
38 pages, 1513 KiB  
Article
Secure Partitioning of Cloud Applications, with Cost Look-Ahead
by Alessandro Bocci, Stefano Forti, Roberto Guanciale, Gian-Luigi Ferrari and Antonio Brogi
Future Internet 2023, 15(7), 224; https://doi.org/10.3390/fi15070224 - 22 Jun 2023
Viewed by 946
Abstract
The security of Cloud applications is a major concern for application developers and operators. Protecting users’ data confidentiality requires methods to avoid leakage from vulnerable software and unreliable Cloud providers. Recently, trusted execution environments (TEEs) emerged in Cloud settings to isolate applications from the privileged access of Cloud providers. Such hardware-based technologies exploit separation kernels, which aim at safely isolating the software components of applications. In this article, we propose a methodology to determine safe partitionings of Cloud applications to be deployed on TEEs. Through a probabilistic cost model, we enable application operators to select the best trade-off partitioning in terms of future re-partitioning costs and the number of domains. To the best of our knowledge, no previous proposal exists addressing such a problem. We exploit information-flow security techniques to protect the data confidentiality of applications by relying on declarative methods to model applications and their data flow. The proposed solution is assessed by executing a proof-of-concept implementation that shows the relationship among the future partitioning costs, number of domains and execution times. Full article
(This article belongs to the Collection Information Systems Security)
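A toy sketch of the cost look-ahead idea: each candidate partitioning has a number of TEE domains and a set of possible future changes, each with a probability and a re-partitioning cost, and the best expected trade-off is selected. The numbers and scoring weights are illustrative assumptions, not the paper's model.

```python
candidates = {
    # partitioning: (number of domains, [(probability, re-partitioning cost)])
    "P1": (2, [(0.6, 10.0), (0.4, 40.0)]),
    "P2": (4, [(0.9, 5.0), (0.1, 15.0)]),
}

def score(domains: int, outcomes, w_cost: float = 1.0, w_domains: float = 2.0):
    """Weighted trade-off between expected future cost and domain count."""
    expected_cost = sum(p * c for p, c in outcomes)
    return w_cost * expected_cost + w_domains * domains

best = min(candidates, key=lambda k: score(*candidates[k]))
print("best trade-off partitioning:", best)   # P2: 0.9*5 + 0.1*15 + 2*4 = 14
```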
16 pages, 1922 KiB  
Article
Heart DT: Monitoring and Preventing Cardiac Pathologies Using AI and IoT Sensors
by Roberta Avanzato, Francesco Beritelli, Alfio Lombardo and Carmelo Ricci
Future Internet 2023, 15(7), 223; https://doi.org/10.3390/fi15070223 - 22 Jun 2023
Cited by 5 | Viewed by 1404
Abstract
Today’s healthcare facilities require new digital tools to cope with the rapidly increasing demand for technology that can support healthcare operators. The advancement of technology is leading to the pervasive use of IoT devices in daily life, capable of acquiring biomedical and biometric parameters, and providing an opportunity to activate new tools for the medical community. Digital twins (DTs) are a form of technology that are gaining more prominence in these scenarios. Many scientific research papers in the literature are combining artificial intelligence (AI) with DTs. In this work, we propose a case study including a proof of concept based on microservices, the heart DT, for the evaluation of electrocardiogram (ECG) signals by means of an artificial intelligence component. In addition, a higher-level platform is presented and described for the complete management and monitoring of cardiac pathologies. The overall goal is to provide a system that can facilitate the patient–doctor relationship, improve medical treatment times, and reduce costs. Full article
(This article belongs to the Special Issue State-of-the-Art Future Internet Technology in Italy 2022–2023)
18 pages, 3674 KiB  
Article
Bus Travel Time Prediction Based on the Similarity in Drivers’ Driving Styles
by Zhenzhong Yin and Bin Zhang
Future Internet 2023, 15(7), 222; https://doi.org/10.3390/fi15070222 - 21 Jun 2023
Cited by 2 | Viewed by 1234
Abstract
Providing accurate and real-time bus travel time information is crucial for both passengers and public transportation managers. However, in the traditional bus travel time prediction model, due to the lack of consideration of the influence of different bus drivers’ driving styles on the bus travel time, the prediction result is not ideal. In the traditional bus travel time prediction model, the historical travel data of all drivers in the entire bus line are usually used for training and prediction. Due to great differences in individual driving styles, the eigenvalues of drivers’ driving parameters are widely distributed. Therefore, the prediction accuracy of the model trained by this dataset is low. At the same time, the training time of the model is too long due to the large sample size, making it difficult to provide a timely prediction in practical applications. However, if only the historical dataset of a single driver is used for training and prediction, the amount of training data is too small, and it is also difficult to accurately predict travel time. To solve these problems, this paper proposes a method to predict bus travel times based on the similarity of drivers’ driving styles. Firstly, the historical travel time data of different drivers are clustered, and then the corresponding types of drivers’ historical data are used to predict the travel time, so as to improve the accuracy and speed of the travel time prediction. We evaluated our approach using a real-world bus trajectory dataset collected in Shenyang, China. The experimental results show that the accuracy of the proposed method is 13.4% higher than that of the traditional method. Full article
(This article belongs to the Special Issue Artificial Intelligence for Smart Cities)
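A hedged sketch of the two-stage approach: cluster drivers by driving-style features, then predict a trip's travel time from the matching cluster's history only. The features and trip times below are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans

# Per-driver features: mean speed (km/h), mean acceleration (m/s^2)
drivers = np.array([[32, 0.8], [32, 0.9], [24, 0.4], [25, 0.5], [31, 0.7]])
trip_minutes = np.array([38.0, 37.0, 49.0, 47.5, 39.5])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(drivers)

new_driver = np.array([[30, 0.75]])
cluster = kmeans.predict(new_driver)[0]
same_style = trip_minutes[kmeans.labels_ == cluster]   # only this style's history
print(f"predicted travel time: {same_style.mean():.1f} min")
```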
23 pages, 11246 KiB  
Article
Cache-Enabled Adaptive Video Streaming: A QoE-Based Evaluation Study
by Eirini Liotou, Dionysis Xenakis, Vasiliki Georgara, Georgios Kourouniotis and Lazaros Merakos
Future Internet 2023, 15(7), 221; https://doi.org/10.3390/fi15070221 - 21 Jun 2023
Cited by 1 | Viewed by 1506
Abstract
Dynamic Adaptive Streaming over HTTP (DASH) has prevailed as the dominant way of video transmission over the Internet. This technology is based on receiving small sequential video segments from a server. However, one challenge that has not been adequately examined is how to obtain video segments in a way that serves both the needs of the network and the improvement in the Quality of Experience (QoE) of the users. One effective way to achieve this is to implement and study caching and DASH technologies together. This paper investigates this issue by simulating a network with multiple video servers and a video client. It then implements both peer-to-many communications in the context of adaptive video streaming and a video server caching algorithm based on proposed criteria that improve the status of the network and/or the user. Specifically, we investigate the scenario of delivering DASH-based content with the help of an intermediate server, apart from a main server, to demonstrate possible caching benefits for different sizes of intermediate storage servers. Extensive experimentation using emulation reveals the interplay and delicate balance between caching and DASH, guiding such network design decisions. A general tendency found is that, as the available buffer size increases, the video playback quality increases to some extent. However, at the same time, this improvement is linked to the random cache selection algorithm. Full article
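A toy sketch of the interplay studied above: a client requests sequential DASH segments, and an intermediate cache of limited size serves hits locally while misses are fetched from the main server. Random eviction mirrors the random cache-selection policy mentioned; all sizes are illustrative.

```python
import random

random.seed(0)
CACHE_SIZE = 8
cache, hits, requests = set(), 0, 200

for _ in range(requests):
    segment = random.randint(1, 40)          # popularity-agnostic requests
    if segment in cache:
        hits += 1                            # served by the intermediate server
    else:
        if len(cache) >= CACHE_SIZE:
            cache.remove(random.choice(tuple(cache)))   # random eviction
        cache.add(segment)                   # fetched from main server, cached

print(f"cache hit ratio: {hits / requests:.0%}")
```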