Future Internet, Volume 14, Issue 4 (April 2022) – 25 articles

Cover Story: In an experiment, 41 participants received different types of phishing training. They were asked to assume a persona, open that persona's inbox, and remove all phishing emails. One group received contextual training, another game-based training, and a third served as a control group. The maximum score was 11, and the contextual group achieved the highest mean score (10). The mean score for the game-based group was 9.09, while it was 8.82 for the control group. Only 7.7% of the participants scored 11 points, and all of those were in the contextual group. Two main conclusions can be drawn from this research. First, contextual training is superior to the on-demand game-based training used in this experiment. Second, phishing detection is a tough challenge for users, even with training. View this paper.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
14 pages, 1814 KiB  
Article
Creating Honeypots to Prevent Online Child Exploitation
by Joel Scanlan, Paul A. Watters, Jeremy Prichard, Charlotte Hunn, Caroline Spiranovic and Richard Wortley
Future Internet 2022, 14(4), 121; https://doi.org/10.3390/fi14040121 - 14 Apr 2022
Cited by 3 | Viewed by 6532
Abstract
Honeypots have been a key tool in controlling and understanding digital crime for several decades. The tool has traditionally been deployed against actors who are attempting to hack into systems or as a discovery mechanism for new forms of malware. This paper presents a novel approach to using a honeypot architecture in conjunction with social networks to respond to non-technical digital crimes. The tool is presented within the context of Child Exploitation Material (CEM), and to support the goal of taking an educative approach to Internet users who are developing an interest in this material. The architecture that is presented in the paper includes multiple layers, including recruitment, obfuscation, and education. The approach does not aim to collect data to support punitive action, but to educate users, increasing their knowledge and awareness of the negative impacts of such material. Full article
(This article belongs to the Special Issue Security and Community Detection in Social Network)
32 pages, 1245 KiB  
Article
Future Wireless Networking Experiments Escaping Simulations
by Sachin Sharma, Saish Urumkar, Gianluca Fontanesi, Byrav Ramamurthy and Avishek Nag
Future Internet 2022, 14(4), 120; https://doi.org/10.3390/fi14040120 - 14 Apr 2022
Cited by 4 | Viewed by 3240
Abstract
In computer networking, simulations are widely used to test and analyse new protocols and ideas. Currently, there are a number of open real testbeds available to test the new protocols. In the EU, for example, there are Fed4Fire testbeds, while in the US, there are POWDER and COSMOS testbeds. Several other countries, including Japan, Brazil, India, and China, have also developed next-generation testbeds. Compared to simulations, these testbeds offer a more realistic way to test protocols and prototypes. In this paper, we examine some available wireless testbeds from the EU and the US, which are part of an open-call EU project under the NGIAtlantic H2020 initiative to conduct Software-Defined Networking (SDN) experiments on intelligent Internet of Things (IoT) networks. Furthermore, the paper presents benchmarking results and failure recovery results from each of the considered testbeds using a variety of wireless network topologies. The paper compares the testbeds based on throughput, latency, jitter, resources available, and failure recovery time, by sending different types of traffic. The results demonstrate the feasibility of performing wireless experiments on different testbeds in the US and the EU. Further, issues faced during experimentation on EU and US testbeds are also reported. Full article
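The benchmarking metrics compared above can be illustrated with a short sketch (not taken from the paper; the sample values and field names are hypothetical): throughput from bytes transferred over time, and jitter as the mean absolute difference between consecutive latency samples.

```python
import statistics

def summarize_probe(latencies_ms, bytes_received, duration_s):
    """Summarize one benchmarking run: mean latency, jitter, and throughput.

    Jitter is taken here as the mean absolute difference between
    consecutive latency samples (simplified, no RFC 3550 smoothing).
    """
    diffs = [abs(b - a) for a, b in zip(latencies_ms, latencies_ms[1:])]
    return {
        "mean_latency_ms": statistics.mean(latencies_ms),
        "jitter_ms": statistics.mean(diffs) if diffs else 0.0,
        "throughput_mbps": (bytes_received * 8) / (duration_s * 1e6),
    }

# Hypothetical samples from one testbed run
print(summarize_probe([12.1, 11.8, 13.0, 12.4], bytes_received=25_000_000, duration_s=10))
```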
11 pages, 1901 KiB  
Article
A Lightweight Certificateless Group Key Agreement Method without Pairing Based on Blockchain for Smart Grid
by Zhihao Wang, Ru Huo and Shuo Wang
Future Internet 2022, 14(4), 119; https://doi.org/10.3390/fi14040119 - 14 Apr 2022
Cited by 9 | Viewed by 2235
Abstract
In smart grids, the access verification of a large number of intelligent gateways and terminal devices has become one of the main concerns in ensuring system security. This means that smart grids need a new key management method that is safe, efficient, and has a low computational cost. Although a large number of scholars have conducted relevant research, most of the proposed schemes cannot balance computational overhead and security. Therefore, we propose a lightweight and secure key management method with a low computational overhead, based on blockchain, for smart grids. Firstly, we redesigned the architecture of the smart grid based on blockchain and completed the division of the various entities. Furthermore, we designed a pairing-free certificateless authenticated group key agreement method based on blockchain under this architecture. Finally, we achieved higher security attributes and lower authentication delay and computational overhead compared to traditional schemes, as shown by the performance analysis and comparison. Full article
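As a rough illustration of what "pairing-free" means in practice, the sketch below derives a shared symmetric key from a plain elliptic-curve Diffie–Hellman exchange using the Python cryptography package. It is not the authors' certificateless group scheme (which is blockchain-assisted and covers whole groups of devices); the curve, key-derivation parameters, and entity names are assumptions.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates an EC key pair (no bilinear pairing is needed).
terminal_priv = ec.generate_private_key(ec.SECP256R1())
gateway_priv = ec.generate_private_key(ec.SECP256R1())

# Public keys are exchanged (in the paper this exchange is validated via
# the blockchain; here it is done directly for brevity).
shared_t = terminal_priv.exchange(ec.ECDH(), gateway_priv.public_key())
shared_g = gateway_priv.exchange(ec.ECDH(), terminal_priv.public_key())

def derive_key(shared_secret: bytes) -> bytes:
    """Derive a 256-bit symmetric session key from the shared secret."""
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"smart-grid-group-key").derive(shared_secret)

assert derive_key(shared_t) == derive_key(shared_g)
```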
46 pages, 2630 KiB  
Systematic Review
Deep Learning for Vulnerability and Attack Detection on Web Applications: A Systematic Literature Review
by Rokia Lamrani Alaoui and El Habib Nfaoui
Future Internet 2022, 14(4), 118; https://doi.org/10.3390/fi14040118 - 13 Apr 2022
Cited by 13 | Viewed by 6854
Abstract
Web applications are the best Internet-based solution to provide online web services, but they also bring serious security challenges. Thus, enhancing web applications security against hacking attempts is of paramount importance. Traditional Web Application Firewalls based on manual rules and traditional Machine Learning need a lot of domain expertise and human intervention and have limited detection results faced with the increasing number of unknown web attacks. To this end, more research work has recently been devoted to employing Deep Learning (DL) approaches for web attacks detection. We performed a Systematic Literature Review (SLR) and quality analysis of 63 Primary Studies (PS) on DL-based web applications security published between 2010 and September 2021. We investigated the PS from different perspectives and synthesized the results of the analyses. To the best of our knowledge, this study is the first of its kind on SLR in this field. The key findings of our study include the following. (i) It is fundamental to generate standard real-world web attacks datasets to encourage effective contribution in this field and to reduce the gap between research and industry. (ii) It is interesting to explore some advanced DL models, such as Generative Adversarial Networks and variants of Encoders–Decoders, in the context of web attacks detection as they have been successful in similar domains such as networks intrusion detection. (iii) It is fundamental to bridge expertise in web applications security and expertise in Machine Learning to build theoretical Machine Learning models tailored for web attacks detection. (iv) It is important to create a corpus for web attacks detection in order to take full advantage of text mining in DL-based web attacks detection models construction. (v) It is essential to define a common framework for developing and comparing DL-based web attacks detection models. This SLR is intended to improve research work in the domain of DL-based web attacks detection, as it covers a significant number of research papers and identifies the key points that need to be addressed in this research field. Such a contribution is helpful as it allows researchers to compare existing approaches and to exploit the proposed future work opportunities. Full article
(This article belongs to the Topic Big Data and Artificial Intelligence)
35 pages, 13088 KiB  
Review
From 5G to 6G—Challenges, Technologies, and Applications
by Ahmed I. Salameh and Mohamed El Tarhuni
Future Internet 2022, 14(4), 117; https://doi.org/10.3390/fi14040117 - 12 Apr 2022
Cited by 44 | Viewed by 8608
Abstract
As the deployment of 5G mobile radio networks gains momentum across the globe, the wireless research community is already planning the successor of 5G. In this paper, we highlight the shortcomings of 5G in meeting the needs of more data-intensive, low-latency, and ultra-high-reliability applications. We then discuss the salient characteristics of the 6G network following a hierarchical approach including the social, economic, and technological aspects. We also discuss some of the key technologies expected to support the move towards 6G. Finally, we quantify and summarize the research work related to beyond 5G and 6G networks through an extensive search of publications and research groups and present a possible timeline for 6G activities. Full article
28 pages, 1891 KiB  
Review
ML-Based 5G Network Slicing Security: A Comprehensive Survey
by Ramraj Dangi, Akshay Jadhav, Gaurav Choudhary, Nicola Dragoni, Manas Kumar Mishra and Praveen Lalwani
Future Internet 2022, 14(4), 116; https://doi.org/10.3390/fi14040116 - 08 Apr 2022
Cited by 34 | Viewed by 7883
Abstract
Fifth-generation networks efficiently support and fulfill the demands of mobile broadband and communication services. There has been a continuing advancement from 4G to 5G networks, with 5G mainly providing three services: enhanced mobile broadband (eMBB), massive machine-type communication (mMTC), and ultra-reliable low-latency communication (URLLC). Since it is difficult to provide all of these services on a single physical network, the 5G network is partitioned into multiple virtual networks called "slices". These slices are customized for these unique services and enable the network to be reliable and to fulfill the needs of its users. This phenomenon is called network slicing. Security is a critical concern in network slicing as adversaries have evolved to become more competent and often employ new attack strategies. This study focused on the security issues that arise during the network slice lifecycle. Machine learning and deep learning algorithm solutions were applied in the planning and design, construction and deployment, monitoring, fault detection, and security phases of the slices. This paper outlines the 5G network slicing concept, its layers and architectural framework, and the prevention of attacks, threats, and issues that represent how network slicing influences the 5G network. This paper also provides a comparison of existing surveys and maps out taxonomies to illustrate various machine learning solutions for different application parameters and network functions, along with significant contributions to the field. Full article
(This article belongs to the Section Network Virtualization and Edge/Fog Computing)
19 pages, 780 KiB  
Article
Ransomware-Resilient Self-Healing XML Documents
by Mahmoud Al-Dwairi, Ahmed S. Shatnawi, Osama Al-Khaleel and Basheer Al-Duwairi
Future Internet 2022, 14(4), 115; https://doi.org/10.3390/fi14040115 - 07 Apr 2022
Cited by 7 | Viewed by 2868
Abstract
In recent years, various platforms have witnessed an unprecedented increase in the number of ransomware attacks targeting hospitals, governments, enterprises, and end-users. The purpose of these attacks is to maliciously encrypt documents and files on infected machines, depriving victims of access to their data, whereupon attackers seek some sort of ransom in return for restoring access to the legitimate owners; hence the name. This cybersecurity threat inherently causes substantial financial losses and time wastage for affected organizations and users. A great deal of research has taken place across academia and industry to combat this threat and mitigate its danger. These ongoing endeavors have resulted in several detection and prevention schemes. Nonetheless, these approaches do not cover all possible risks of losing data. In this paper, we address this facet and provide an efficient solution that ensures an efficient recovery of XML documents from ransomware attacks. This paper proposes a self-healing version-aware ransomware recovery (SH-VARR) framework for XML documents. The proposed framework is based on the novel idea of using the link concept to maintain file versions in a distributed manner while applying access-control mechanisms to protect these versions from being encrypted or deleted. The proposed SH-VARR framework is experimentally evaluated in terms of storage overhead, time requirement, CPU utilization, and memory usage. Results show that the snapshot size increases proportionately with the original size; that the time required is less than 120 ms for files smaller than 1 MB; and that the highest CPU utilization occurs when using bzip2. Moreover, when zip and gzip are used, the memory usage is almost fixed (around 6.8 KB). In contrast, it increases to around 28 KB when bzip2 is used. Full article
(This article belongs to the Topic Cyber Security and Critical Infrastructures)
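The storage-overhead and compression comparisons reported above can be mimicked with the standard-library compressors; the sketch below keeps read-only compressed snapshots of an XML file. It is only a toy illustration of the versioning idea, not the SH-VARR implementation; the file naming and the zlib/gzip/bz2 choices are assumptions.

```python
import bz2, gzip, os, time, zlib

def snapshot(xml_path: str, version_dir: str, compressor: str = "gzip") -> str:
    """Store a compressed, read-only snapshot of an XML document.

    The real SH-VARR framework maintains versions through links and
    access-control mechanisms; this sketch only shows the version-file
    and compression side of the idea.
    """
    algos = {"zlib": zlib.compress, "gzip": gzip.compress, "bz2": bz2.compress}
    with open(xml_path, "rb") as f:
        data = f.read()
    os.makedirs(version_dir, exist_ok=True)
    name = f"{os.path.basename(xml_path)}.{int(time.time())}.{compressor}"
    out = os.path.join(version_dir, name)
    with open(out, "wb") as f:
        f.write(algos[compressor](data))
    os.chmod(out, 0o444)  # read-only: malware running as the user cannot rewrite it
    return out
```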
19 pages, 3156 KiB  
Article
Interoperable Data Analytics Reference Architectures Empowering Digital-Twin-Aided Manufacturing
by Attila Csaba Marosi, Márk Emodi, Ákos Hajnal, Róbert Lovas, Tamás Kiss, Valerie Poser, Jibinraj Antony, Simon Bergweiler, Hamed Hamzeh, James Deslauriers and József Kovács
Future Internet 2022, 14(4), 114; https://doi.org/10.3390/fi14040114 - 06 Apr 2022
Cited by 6 | Viewed by 3480
Abstract
The use of mature, reliable, and validated solutions can save significant time and cost when introducing new technologies to companies. Reference Architectures represent such best-practice techniques and have the potential to increase the speed and reliability of the development process in many application domains. One area where Reference Architectures are increasingly utilized is cloud-based systems. Exploiting the high-performance computing capability offered by clouds, while keeping sovereignty and governance of proprietary information assets, can be challenging. This paper explores how Reference Architectures can be applied to overcome this challenge when developing cloud-based applications. The presented approach was developed within the DIGITbrain European project, which aims at supporting small and medium-sized enterprises (SMEs) and mid-caps in realizing smart business models called Manufacturing as a Service, via the efficient utilization of Digital Twins. In this paper, an overview of Reference Architecture concepts, as well as their classification, specialization, and particular application possibilities, is presented. Various data management and potentially spatially detached data processing configurations are discussed, with special attention to machine learning techniques, which are of high interest within various sectors, including manufacturing. A framework that enables the deployment and orchestration of such overall data analytics Reference Architectures on cloud resources is also presented, followed by a demonstrative application example in which the applicability of the introduced techniques and solutions is showcased in practice. Full article
(This article belongs to the Special Issue Big Data Analytics, Privacy and Visualization)
20 pages, 420 KiB  
Article
Multi-Layer Feature Fusion-Based Community Evolution Prediction
by Zhao Wang, Qingguo Xu and Weimin Li
Future Internet 2022, 14(4), 113; https://doi.org/10.3390/fi14040113 - 06 Apr 2022
Cited by 1 | Viewed by 1901
Abstract
Analyzing and predicting community evolution has many important applications in criminology, sociology, and other fields. In community evolution prediction, most of the existing research is simply calculating the features of the community, and then predicting the evolution event through the classifier. However, these methods do not consider the complex characteristics of community evolution, and only predict the community’s evolution from a single level. To solve these problems, this paper proposes an algorithm called multi-layer feature fusion-based community evolution prediction, which obtains features from the community layer and node layer. The final community feature is the fusion of the two layer features. At the node layer, this paper proposes a global and local-based role-extraction algorithm. This algorithm can effectively discover different roles in the community. In this way, we can distinguish the influence of nodes with different characteristics on the community evolution. At the community layer, this paper proposes to use the community hypergraph to obtain the inter-community interaction relationship. After all the features are obtained, this paper trains a classifier through these features and uses them in community evolution prediction. The experimental results show that the algorithm proposed in this paper is better than other algorithms in terms of prediction effect. Full article
(This article belongs to the Section Big Data and Augmented Intelligence)
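A minimal sketch of the fusion step, assuming the node-layer and community-layer features have already been computed: the two feature blocks are simply concatenated and fed to a classifier. The array shapes, random stand-in data, and the choice of a random forest are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical pre-computed features for a set of community snapshots:
# node-layer features (e.g., aggregated role counts) and community-layer
# features (e.g., size, density, hypergraph interaction measures).
node_layer = np.random.rand(200, 8)
community_layer = np.random.rand(200, 5)
events = np.random.randint(0, 4, size=200)   # e.g., grow/shrink/merge/split labels

# "Fusion" here is plain concatenation of the two feature layers.
fused = np.hstack([node_layer, community_layer])

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(fused, events)
print(clf.predict(fused[:3]))
```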
22 pages, 3697 KiB  
Article
HealthFetch: An Influence-Based, Context-Aware Prefetch Scheme in Citizen-Centered Health Storage Clouds
by Chrysostomos Symvoulidis, George Marinos, Athanasios Kiourtis, Argyro Mavrogiorgou and Dimosthenis Kyriazis
Future Internet 2022, 14(4), 112; https://doi.org/10.3390/fi14040112 - 01 Apr 2022
Cited by 4 | Viewed by 2834
Abstract
Over the past few years, increasing attention has been given to the health sector and the integration of new technologies into it. Cloud computing and storage clouds have become essentially state-of-the-art solutions in other major areas and are rapidly making their presence felt in the health sector as well. More and more companies are working toward a future that will allow healthcare professionals to engage more with such infrastructures, opening up a vast number of possibilities for them. While this is a very important step, less attention has been given to the citizens. For this reason, in this paper, a citizen-centered storage cloud solution is proposed that allows citizens to hold their health data in their own hands while also enabling the exchange of these data with healthcare professionals during emergency situations. In addition, in order to reduce the health data transmission delay, a novel context-aware prefetch engine enriched with deep learning capabilities is proposed. The proposed prefetch scheme, along with the proposed storage cloud, is put under a two-fold evaluation in several deployment and usage scenarios in order to examine its performance with respect to data transmission times, while also evaluating its outcomes compared to other state-of-the-art solutions. The results show that the proposed solution significantly improves the download speed when compared with the storage cloud alone, especially when large data are exchanged. In addition, the evaluation of the proposed scheme shows that it improves the overall predictions, considering the coefficient of determination (R2 > 0.94) and the mean of errors (RMSE < 1), while also reducing the training data by 12%. Full article
(This article belongs to the Topic Big Data and Artificial Intelligence)
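The coefficient of determination and RMSE figures reported above can be computed as follows; the example arrays are hypothetical stand-ins, not data from the evaluation.

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

y_true = np.array([3.0, 5.0, 2.5, 7.0, 4.2])   # hypothetical observed values
y_pred = np.array([2.8, 5.1, 2.7, 6.8, 4.0])   # hypothetical prefetch-engine predictions

r2 = r2_score(y_true, y_pred)                  # coefficient of determination
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
print(f"R2 = {r2:.3f}, RMSE = {rmse:.3f}")
```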
22 pages, 3917 KiB  
Article
Location Transparency Call (LTC) System: An Intelligent Phone Dialing System Based on the Phone of Things (PoT) Architecture
by Haytham Khalil and Khalid Elgazzar
Future Internet 2022, 14(4), 111; https://doi.org/10.3390/fi14040111 - 31 Mar 2022
Cited by 2 | Viewed by 3087
Abstract
Phone of Things (PoT) extends the connectivity options for IoT systems by leveraging the ubiquitous phone network infrastructure, making it part of the IoT architecture. PoT enriches the connectivity options of IoT while promoting its affordability, accessibility, security, and scalability. PoT enables incentive IoT applications that can result in more innovative homes, office environments, and telephony solutions. This paper presents the Location Transparency Call (LTC) system, an intelligent phone dialing system for businesses based on the PoT architecture. The LTC system intelligently mitigates the impact of missed calls on companies and provides high availability and dynamic reachability to employees within the premises. LTC automatically forwards calls intended for employees to the closest phone extensions at their current locations. Location transparency is achieved by actively maintaining and dynamically updating a real-time database that maps persons' locations using the RFID tags they carry. We demonstrate the system's feasibility and usability and evaluate its performance through a fully fledged prototype representing its hardware and software components that can be applied in real situations at large scale. Full article
(This article belongs to the Special Issue Edge Computing for Internet of Things and Cyber-Physical Systems)
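A toy sketch of the location-transparency lookup, assuming an RFID read simply updates an in-memory map from employee to location and an incoming call is then forwarded to the extension registered for that location. All identifiers and the data structure are hypothetical, not the prototype's implementation.

```python
# Minimal sketch of the location-transparency idea (not the authors' code):
# RFID readers report where an employee's tag was last seen, and calls are
# forwarded to the extension associated with that location.

extension_by_location = {"lobby": "100", "lab-2": "214", "office-3a": "317"}
last_seen = {}   # employee id -> location, updated in real time by RFID readers

def rfid_read(employee_id: str, location: str) -> None:
    last_seen[employee_id] = location

def forward_call(employee_id: str, default_ext: str = "0") -> str:
    """Return the extension an incoming call should be forwarded to."""
    location = last_seen.get(employee_id)
    return extension_by_location.get(location, default_ext)

rfid_read("emp-42", "lab-2")
print(forward_call("emp-42"))   # -> "214"
```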
14 pages, 1216 KiB  
Article
Decorrelation-Based Deep Learning for Bias Mitigation
by Pranita Patil and Kevin Purcell
Future Internet 2022, 14(4), 110; https://doi.org/10.3390/fi14040110 - 29 Mar 2022
Cited by 2 | Viewed by 3693
Abstract
Although deep learning has proven to be tremendously successful, the main issue is the dependency of its performance on the quality and quantity of training datasets. Since the quality of data can be affected by biases, a novel deep learning method based on decorrelation is presented in this study. The decorrelation specifically learns bias invariant features by reducing the non-linear statistical dependency between features and bias itself. This makes the deep learning models less prone to biased decisions by addressing data bias issues. We introduce Decorrelated Deep Neural Networks (DcDNN) or Decorrelated Convolutional Neural Networks (DcCNN) and Decorrelated Artificial Neural Networks (DcANN) by applying decorrelation-based optimization to Deep Neural Networks (DNN) and Artificial Neural Networks (ANN), respectively. Previous bias mitigation methods result in a drastic loss in accuracy at the cost of bias reduction. Our study aims to resolve this by controlling how strongly the decorrelation function for bias reduction and loss function for accuracy affect the network objective function. The detailed analysis of the hyperparameter shows that for the optimal value of hyperparameter, our model is capable of maintaining accuracy while being bias invariant. The proposed method is evaluated on several benchmark datasets with different types of biases such as age, gender, and color. Additionally, we test our approach along with traditional approaches to analyze the bias mitigation in deep learning. Using simulated datasets, the results of t-distributed stochastic neighbor embedding (t-SNE) of the proposed model validated the effective removal of bias. An analysis of fairness metrics and accuracy comparisons shows that using our proposed models reduces the biases without compromising accuracy significantly. Furthermore, the comparison of our method with existing methods shows the superior performance of our model in terms of bias mitigation, as well as simplicity of training. Full article
(This article belongs to the Section Big Data and Augmented Intelligence)
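The objective described above (a task loss plus a hyperparameter-weighted decorrelation term between learned features and the bias variable) can be sketched in PyTorch. The penalty below is a simple cross-covariance term, so it only captures linear dependence, whereas the paper targets non-linear statistical dependency; treat it as illustrative only.

```python
import torch

def decorrelation_penalty(features: torch.Tensor, bias_var: torch.Tensor) -> torch.Tensor:
    """Mean squared cross-covariance between features and a bias variable.

    features: (batch, d) activations from an intermediate layer
    bias_var: (batch,) protected/bias attribute (e.g., age, gender, colour code)
    """
    f = features - features.mean(dim=0, keepdim=True)
    b = (bias_var - bias_var.mean()).unsqueeze(1)   # (batch, 1)
    cross_cov = (f * b).mean(dim=0)                  # (d,)
    return (cross_cov ** 2).mean()

def total_loss(task_loss, features, bias_var, lam=0.1):
    # lam plays the role of the hyperparameter balancing accuracy and bias reduction
    return task_loss + lam * decorrelation_penalty(features, bias_var)
```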
25 pages, 3216 KiB  
Article
TalkRoBots: A Middleware for Robotic Systems in Industry 4.0
by Marwane Ayaida, Nadhir Messai, Frederic Valentin and Dimitri Marcheras
Future Internet 2022, 14(4), 109; https://doi.org/10.3390/fi14040109 - 29 Mar 2022
Cited by 3 | Viewed by 2650
Abstract
This paper proposes a middleware called TalkRoBots that handles interoperability issues that could be encountered in Industry 4.0. The middleware offers a unified communication approach facilitating collaboration between heterogeneous equipment without needing to change either the software already in use or the existing hardware. It allows heterogeneous robots, using both open and proprietary robotic frameworks (i.e., ROS, ABB, Universal Robots, etc.), to communicate and share information in a transparent manner, and it allows robots and Industrial Internet of Things (IIoT) devices to communicate with each other. Furthermore, a resilience mechanism based on an Artificial Intelligence (AI) approach was designed to allow a defective robot to be automatically replaced with an optimal available alternative. A remote interface, which can be run through the Cloud, allows users to manipulate fleets of robots from anywhere and to obtain access to sensors' data. A practical scenario using five different robots has been realized to demonstrate the different possibilities, which also demonstrates the cost effectiveness of our middleware in terms of its impact on the communication network. Finally, a simulation study that evaluates the scalability of our middleware clearly shows that TalkRoBots can be used efficiently in industrial scenarios involving a huge number of heterogeneous robots and IIoT devices. Full article
(This article belongs to the Special Issue AI-Empowered Future Networks)
18 pages, 3711 KiB  
Article
Adaptative Perturbation Patterns: Realistic Adversarial Learning for Robust Intrusion Detection
by João Vitorino, Nuno Oliveira and Isabel Praça
Future Internet 2022, 14(4), 108; https://doi.org/10.3390/fi14040108 - 29 Mar 2022
Cited by 13 | Viewed by 7924
Abstract
Adversarial attacks pose a major threat to machine learning and to the systems that rely on it. In the cybersecurity domain, adversarial cyber-attack examples capable of evading detection are especially concerning. Nonetheless, an example generated for a domain with tabular data must be realistic within that domain. This work establishes the fundamental constraint levels required to achieve realism and introduces the adaptative perturbation pattern method (A2PM) to fulfill these constraints in a gray-box setting. A2PM relies on pattern sequences that are independently adapted to the characteristics of each class to create valid and coherent data perturbations. The proposed method was evaluated in a cybersecurity case study with two scenarios: Enterprise and Internet of Things (IoT) networks. Multilayer perceptron (MLP) and random forest (RF) classifiers were created with regular and adversarial training, using the CIC-IDS2017 and IoT-23 datasets. In each scenario, targeted and untargeted attacks were performed against the classifiers, and the generated examples were compared with the original network traffic flows to assess their realism. The obtained results demonstrate that A2PM provides a scalable generation of realistic adversarial examples, which can be advantageous for both adversarial training and attacks. Full article
(This article belongs to the Topic Cyber Security and Critical Infrastructures)
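A minimal, assumption-laden sketch of class-wise perturbation: samples of one class are perturbed only within that class's observed per-feature range, so the perturbed records stay plausible. This is not the published A2PM implementation, which additionally adapts pattern sequences to the characteristics of each feature.

```python
import numpy as np

def classwise_perturb(X, y, target_class, scale=0.05, rng=None):
    """Perturb samples of one class only within that class's observed
    per-feature [min, max] range, so perturbed flows stay plausible.
    Illustrative only; A2PM adapts its perturbation patterns per class
    and per feature type rather than adding range-scaled noise.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    Xp = X.copy()
    mask = (y == target_class)
    lo, hi = X[mask].min(axis=0), X[mask].max(axis=0)
    noise = rng.normal(0.0, scale, size=Xp[mask].shape) * (hi - lo)
    Xp[mask] = np.clip(Xp[mask] + noise, lo, hi)
    return Xp
```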
28 pages, 2237 KiB  
Article
A Multi-Service Adaptive Semi-Persistent LTE Uplink Scheduler for Low Power M2M Devices
by Nusrat Afrin, Jason Brown and Jamil Y. Khan
Future Internet 2022, 14(4), 107; https://doi.org/10.3390/fi14040107 - 27 Mar 2022
Cited by 2 | Viewed by 2195
Abstract
The prominence of Machine-to-Machine (M2M) communications in future wide area communication networks places various challenges on cellular technologies such as the Long Term Evolution (LTE) standard, owing to the large number of M2M devices generating small bursts of infrequent data packets with a wide range of delay requirements. The channel structure and Quality of Service (QoS) framework of LTE networks fail to support M2M traffic with multiple burst sizes and QoS requirements, while a bottleneck often arises from the limited control resources available to communicate future uplink resource allocations to the M2M devices. Moreover, many M2M devices are battery-powered and require a low-power wide area technology for widespread deployments. To alleviate these issues, in this article we propose an adaptive semi-persistent scheduling (SPS) scheme for the LTE uplink which caters for multi-service M2M traffic classes with variable burst sizes and delay tolerances. Instead of adhering to the rigid LTE QoS framework, the proposed algorithm supports variation of uplink allocation sizes based on queued data length, yet does not require control signaling to communicate those allocations to the respective devices. Both the eNodeB and the M2M devices can determine the precise uplink resource allocation parameters based on their mutual knowledge, thus removing the burden of regular control signaling exchanges. Based on a control parameter, the algorithm can offer different capacities and levels of QoS satisfaction to different traffic classes. We also introduce a pre-emptive feature by which the algorithm can prioritize new traffic with low delay tolerance over ongoing delay-tolerant traffic. We also build a model for incorporating the Discontinuous Reception (DRX) mechanism in synchronization with the adaptive SPS transmissions so that UE power consumption can be significantly lowered, thereby extending battery lives. The simulation and performance analysis of the proposed scheme show significant improvement over the traditional LTE scheduler in terms of QoS satisfaction, channel utilization, and the low power requirements of multi-service M2M traffic. Full article
(This article belongs to the Special Issue AI, Machine Learning and Data Analytics for Wireless Communications)
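The key signalling-free property (the eNodeB and the device computing the same allocation from mutually known values) can be illustrated with a deterministic toy function; the resource-block size, cap, and class weight below are placeholders, not values from the paper.

```python
import math

def sps_allocation(queued_bytes: int, bytes_per_rb: int = 100,
                   max_rbs: int = 8, class_weight: float = 1.0) -> int:
    """Number of uplink resource blocks for the next SPS occasion.

    Because the function is deterministic in values both sides already know
    (queued data length, traffic-class control parameter), the eNodeB and
    the device compute the same result without extra grant signalling.
    """
    needed = math.ceil(queued_bytes / bytes_per_rb)
    return max(1, min(max_rbs, math.ceil(needed * class_weight)))

print(sps_allocation(queued_bytes=450))                      # -> 5
print(sps_allocation(queued_bytes=450, class_weight=0.5))    # delay-tolerant class -> 3
```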
3 pages, 179 KiB  
Editorial
Special Issue “Natural Language Engineering: Methods, Tasks and Applications”
by Massimo Esposito, Giovanni Luca Masala, Aniello Minutolo and Marco Pota
Future Internet 2022, 14(4), 106; https://doi.org/10.3390/fi14040106 - 26 Mar 2022
Viewed by 1986
Abstract
Natural language engineering includes a continuously enlarging variety of methods for solving natural language processing (NLP) tasks within a pervasive number of applications [...] Full article
(This article belongs to the Special Issue Natural Language Engineering: Methods, Tasks and Applications)
15 pages, 13681 KiB  
Article
A Dynamic Cache Allocation Mechanism (DCAM) for Reliable Multicast in Information-Centric Networking
by Yingjie Duan, Hong Ni and Xiaoyong Zhu
Future Internet 2022, 14(4), 105; https://doi.org/10.3390/fi14040105 - 25 Mar 2022
Viewed by 2336
Abstract
As a new network architecture, information-centric networking (ICN) decouples the identifiers and locators of network entities and makes full use of in-network cache technology to improve the content distribution efficiency. For reliable multicast, ICN in-network cache can help reduce the loss recovery delay. However, with the development of applications and services, a multicast tree node often serves multiple reliable multicast groups. How to reasonably allocate cache resources for each multicast group will greatly affect the performance of reliable multicast. In order to improve the overall loss recovery performance of reliable multicast, this paper designs a dynamic cache allocation mechanism (DCAM). DCAM considers the packet loss probability, the node depth of the multicast tree, and the multicast transmission rate of multicast group, and then allocates cache space for multicast group based on the normalized cache quota weight. We also explore the performance of three cache allocation mechanisms (DCAM, AARM, and Equal) combined with four cache strategies (LCE, CAPC, Prob, and ProbCache), respectively. Experimental results show that DCAM can adjust cache allocation results in time according to network changes, and its combinations with various cache strategies outperform other combinations. Moreover, the combination of DCAM and CAPC can achieve optimal performance in loss recovery delay, cache hit ratio, transmission completion time, and overhead. Full article
(This article belongs to the Special Issue 5G Wireless Communication Networks)
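A sketch of proportional cache allocation from normalized weights, assuming (purely for illustration) that a group's weight is the product of its packet loss probability, node depth, and transmission rate; the paper's actual weight formula is not reproduced here.

```python
def allocate_cache(groups, total_cache_slots):
    """Split a node's cache among multicast groups by normalized weight.

    groups: dict name -> (loss_prob, node_depth, tx_rate)
    The simple product below is a placeholder for the paper's weight formula.
    """
    weights = {g: p * d * r for g, (p, d, r) in groups.items()}
    total = sum(weights.values())
    return {g: round(total_cache_slots * w / total) for g, w in weights.items()}

groups = {"g1": (0.02, 3, 50), "g2": (0.05, 2, 80), "g3": (0.01, 4, 20)}
print(allocate_cache(groups, total_cache_slots=100))   # e.g. {'g1': 25, 'g2': 68, 'g3': 7}
```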
16 pages, 1422 KiB  
Article
Evaluation of Contextual and Game-Based Training for Phishing Detection
by Joakim Kävrestad, Allex Hagberg, Marcus Nohlberg, Jana Rambusch, Robert Roos and Steven Furnell
Future Internet 2022, 14(4), 104; https://doi.org/10.3390/fi14040104 - 25 Mar 2022
Cited by 10 | Viewed by 5090
Abstract
Cybersecurity is a pressing matter, and a lot of the responsibility for cybersecurity is put on the individual user. The individual user is expected to engage in secure behavior by selecting good passwords, identifying malicious emails, and more. Typical support for users comes from Information Security Awareness Training (ISAT), which makes the effectiveness of ISAT a key cybersecurity issue. This paper presents an evaluation of how two promising methods for ISAT support users in achieving secure behavior, using a simulated experiment with 41 participants. The methods were game-based training, where users learn by playing a game, and Context-Based Micro-Training (CBMT), where users are presented with short information in a situation where the information is of direct relevance. Participants were asked to identify phishing emails while their behavior was monitored using an eye-tracking technique. The research shows that both training methods can support users towards secure behavior and that CBMT does so to a higher degree than game-based training. The research further shows that most participants were susceptible to phishing, even after training, which suggests that training alone is insufficient to make users behave securely. Consequently, future research ideas, where training is combined with other support systems, are proposed. Full article
(This article belongs to the Topic Cyber Security and Critical Infrastructures)
17 pages, 9315 KiB  
Article
Cross-Domain Transfer Learning Prediction of COVID-19 Popular Topics Based on Knowledge Graph
by Xiaolin Chen, Qixing Qu, Chengxi Wei and Shudong Chen
Future Internet 2022, 14(4), 103; https://doi.org/10.3390/fi14040103 - 24 Mar 2022
Viewed by 2543
Abstract
The significance of research on public opinion monitoring of social network emergencies is becoming increasingly important. As a platform for users to communicate and share information online, social networks are often the source of public opinion about emergencies. Considering the relevance and transmissibility of the same event in different social networks, this paper takes the COVID-19 outbreak as the background and selects the platforms Weibo and TikTok as the research objects. In this paper, first, we use the transfer learning model to apply the knowledge obtained in the source domain of Weibo to the target domain of TikTok. From the perspective of text information, we propose an improved TC-LDA model to measure the similarity between the two domains, including temporal similarity and conceptual similarity, which effectively improves the learning effect of instance transfer and makes up for the problem of insufficient sample data in the target domain. Then, based on the results of transfer learning, we use the improved single-pass incremental clustering algorithm to discover and filter popular topics in streaming data of social networks. Finally, we build a topic knowledge graph using the Neo4j graph database and conduct experiments to predict the evolution of popular topics in new emergencies. Our research results can provide a reference for public opinion monitoring and early warning of emergencies in government departments. Full article
(This article belongs to the Special Issue Knowledge Graph Mining and Its Applications)
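The topic-discovery step can be illustrated with a basic single-pass incremental clustering over streaming document vectors (cosine similarity against cluster centroids, opening a new cluster below a threshold). This is the generic algorithm, not the paper's improved variant, and the threshold value is an assumption.

```python
import numpy as np

def single_pass_cluster(doc_vectors, threshold=0.7):
    """Assign each incoming document vector to the most similar existing
    cluster centroid (cosine similarity), or open a new cluster."""
    centroids, members = [], []
    for v in doc_vectors:
        v = v / (np.linalg.norm(v) + 1e-12)
        if centroids:
            sims = [float(v @ c) for c in centroids]
            best = int(np.argmax(sims))
            if sims[best] >= threshold:
                members[best].append(v)
                centroids[best] = np.mean(members[best], axis=0)
                centroids[best] /= np.linalg.norm(centroids[best]) + 1e-12
                continue
        centroids.append(v)
        members.append([v])
    return centroids, members
```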
17 pages, 677 KiB  
Article
Detecting IoT Attacks Using an Ensemble Machine Learning Model
by Vikas Tomer and Sachin Sharma
Future Internet 2022, 14(4), 102; https://doi.org/10.3390/fi14040102 - 24 Mar 2022
Cited by 21 | Viewed by 5972
Abstract
Malicious attacks are becoming more prevalent due to the growing use of Internet of Things (IoT) devices in homes, offices, transportation, healthcare, and other locations. By incorporating fog computing into IoT, attacks can be detected in a short amount of time, as the distance between IoT devices and fog devices is smaller than the distance between IoT devices and the cloud. Machine learning is frequently used for the detection of attacks due to the huge amount of data available from IoT devices. However, the problem is that fog devices may not have enough resources, such as processing power and memory, to detect attacks in a timely manner. This paper proposes an approach to offload the machine learning model selection task to the cloud and the real-time prediction task to the fog nodes. Using the proposed method, based on historical data, an ensemble machine learning model is built in the cloud, followed by the real-time detection of attacks on fog nodes. The proposed approach is tested using the NSL-KDD dataset. The results show the effectiveness of the proposed approach in terms of several performance measures, such as execution time, precision, recall, accuracy, and ROC (receiver operating characteristic) curve. Full article
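A sketch of the cloud/fog split described above, assuming preprocessed NSL-KDD-style feature vectors: an ensemble is fitted on historical data (the cloud-side model building step), and the fitted model is then used for lightweight real-time prediction (the fog side). The estimator choices and random stand-in data are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Stand-in for preprocessed NSL-KDD features/labels (loading not shown).
X_hist, y_hist = np.random.rand(1000, 20), np.random.randint(0, 2, 1000)

# "Cloud" side: build the ensemble on historical data.
ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100)),
                ("lr", LogisticRegression(max_iter=1000)),
                ("dt", DecisionTreeClassifier())],
    voting="hard",
).fit(X_hist, y_hist)

# "Fog" side: the fitted model is shipped to fog nodes for real-time detection.
X_live = np.random.rand(5, 20)
print(ensemble.predict(X_live))   # 1 = attack, 0 = normal (hypothetical coding)
```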
16 pages, 3603 KiB  
Article
Performance Analysis of DF Relay-Assisted D2D Communication in a 5G mmWave Network
by Subhra Sankha Sarma, Ranjay Hazra and Peter Han Joo Chong
Future Internet 2022, 14(4), 101; https://doi.org/10.3390/fi14040101 - 24 Mar 2022
Cited by 2 | Viewed by 2551
Abstract
Enabling D2D communication in the mmWave band has many obstacles that must be mitigated. The primary concern is the introduction of interference from various sources. Thus, we focused our work on the performance of decode-and-forward (DF) relay-assisted D2D communication in the mmWave band to increase the coverage probability and energy efficiency (EE). Three modes are proposed for D2D communication to prevail. The bitwise binary XOR operation was executed at the relay node, which increased the security feature. The radius of coverage was derived, which indicated the switching of the modes. The diffused incoherent scattering power was also considered as part of the power consumption. Furthermore, a unique relay selection scheme, the dynamic relay selection (DRS) method, is proposed to select the optimal relay for information exchange. A comparison of the proposed DF relay scheme with the amplify-and-forward (AF) scheme was also made. Finally, the simulation results proved the efficacy of the proposed work. Full article
(This article belongs to the Special Issue 6G Wireless Channel Measurements and Models: Trends and Challenges)
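The bitwise XOR operation mentioned for the relay node can be shown in a few lines; the packets and the recovery step below are purely illustrative.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

pkt_src = b"\x10\x20\x30\x40"
pkt_dst = b"\xaa\xbb\xcc\xdd"

relayed = xor_bytes(pkt_src, pkt_dst)      # relay forwards the XOR-combined packet
recovered = xor_bytes(relayed, pkt_dst)    # a node holding pkt_dst recovers pkt_src
assert recovered == pkt_src
```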
17 pages, 1532 KiB  
Article
Using Satellite Imagery to Improve Local Pollution Models for High-Voltage Transmission Lines and Insulators
by Peter Krammer, Marcel Kvassay, Ján Mojžiš, Martin Kenyeres, Miloš Očkay, Ladislav Hluchý, Ľuboš Pavlov and Ľuboš Skurčák
Future Internet 2022, 14(4), 99; https://doi.org/10.3390/fi14040099 - 23 Mar 2022
Cited by 7 | Viewed by 2465
Abstract
This paper addresses the regression modeling of local environmental pollution levels for electric power industry needs, which is fundamental for the proper design and maintenance of high-voltage transmission lines and insulators in order to prevent various hazards, such as accidental flashovers due to pollution and the resultant power outages. The primary goal of our study was to increase the precision of regression models for this application area by exploiting additional input attributes extracted from satellite imagery and adjusting the modeling methodology. Given that thousands of different attributes can be extracted from satellite images, of which only a few are likely to contain useful information, we also explored suitable feature selection procedures. We show that a suitable combination of attribute selection methods (relief, FSRF-Test, and forward selection), regression models (random forest models and M5P regression trees), and modeling methodology (estimating field-measured values of target variables rather than their upper bounds) can significantly increase the total modeling accuracy, measured by the correlation between the estimated and the true values of target variables. Specifically, the accuracies of our regression models dramatically rose from 0.12–0.23 to 0.40–0.64, while their relative absolute errors were conversely reduced (e.g., from 1.04 to 0.764 for the best model). Full article
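A sketch that mirrors the described methodology: univariate attribute selection followed by a random forest regressor, scored by the correlation between estimated and measured values. The data, the number of selected attributes, and the forest size are placeholders, and the paper also evaluates relief-based selection and M5P trees, which are not shown here.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Stand-in for thousands of attributes extracted from satellite imagery.
X, y = np.random.rand(300, 500), np.random.rand(300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(SelectKBest(f_regression, k=30),
                      RandomForestRegressor(n_estimators=300, random_state=0))
model.fit(X_tr, y_tr)

corr, _ = pearsonr(model.predict(X_te), y_te)   # accuracy measure used in the paper
print(f"correlation = {corr:.2f}")
```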
16 pages, 3937 KiB  
Article
Deep Regression Neural Networks for Proportion Judgment
by Mario Milicevic, Vedran Batos, Adriana Lipovac and Zeljka Car
Future Internet 2022, 14(4), 100; https://doi.org/10.3390/fi14040100 - 23 Mar 2022
Cited by 2 | Viewed by 2670
Abstract
Deep regression models are widely employed to solve computer vision tasks, such as human age or pose estimation, crowd counting, object detection, etc. Another possible area of application, which to our knowledge has not been systematically explored so far, is proportion judgment. As a prerequisite for successful decision making, individuals often have to use proportion judgment strategies, with which they estimate the magnitude of one stimulus relative to another (larger) stimulus. This makes this estimation problem interesting for the application of machine learning techniques. In regard to this, we proposed various deep regression architectures, which we tested on three original datasets of very different origin and composition. This is a novel approach, as the assumption is that the model can learn the concept of proportion without explicitly counting individual objects. With comprehensive experiments, we have demonstrated the effectiveness of the proposed models which can predict proportions on real-life datasets more reliably than human experts, considering the coefficient of determination (>0.95) and the amount of errors (MAE < 2, RMSE < 3). If there is no significant number of errors in determining the ground truth, with an appropriate size of the learning dataset, an additional reduction of MAE to 0.14 can be achieved. The used datasets will be publicly available to serve as reference data sources in similar projects. Full article
24 pages, 412 KiB  
Article
Bitcoin as a Safe Haven during COVID-19 Disease
by Luisanna Cocco, Roberto Tonelli and Michele Marchesi
Future Internet 2022, 14(4), 98; https://doi.org/10.3390/fi14040098 - 22 Mar 2022
Cited by 3 | Viewed by 3126
Abstract
In this paper, we investigate the role of Bitcoin as a safe haven against stock market losses during the spread of COVID-19. The analysis was based on a regression model with dummy variables defined around some crucial dates of the pandemic and on dynamic conditional correlations. To try to model the real dynamics of the markets, we studied the safe-haven properties of Bitcoin against the losses of thirteen major stock market indexes using daily data spanning from 1 July 2019 until 20 February 2021. A similar analysis was also performed for Ether. Results show that the pandemic impacted Bitcoin's status as a safe haven, but we are still far from being able to define Bitcoin as a safe haven. Full article
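A regression with pandemic dummy variables of the kind described above can be set up with statsmodels; the returns, the dummy date, and the interaction term below are placeholders rather than the authors' specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical daily returns; in the paper the data span 1 Jul 2019 - 20 Feb 2021.
dates = pd.date_range("2019-07-01", "2021-02-20", freq="B")
df = pd.DataFrame({"btc_ret": np.random.normal(0, 0.03, len(dates)),
                   "index_ret": np.random.normal(0, 0.01, len(dates))}, index=dates)

# Dummy = 1 after a crucial pandemic date (placeholder date).
df["covid_dummy"] = (df.index >= "2020-03-11").astype(int)
df["interaction"] = df["index_ret"] * df["covid_dummy"]

X = sm.add_constant(df[["index_ret", "covid_dummy", "interaction"]])
ols = sm.OLS(df["btc_ret"], X).fit()
# The dummy and interaction coefficients show how the Bitcoin-stock relation
# changed during the pandemic window.
print(ols.params)
```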
16 pages, 2178 KiB  
Article
Predicting Dog Emotions Based on Posture Analysis Using DeepLabCut
by Kim Ferres, Timo Schloesser and Peter A. Gloor
Future Internet 2022, 14(4), 97; https://doi.org/10.3390/fi14040097 - 22 Mar 2022
Cited by 16 | Viewed by 10833
Abstract
This paper describes an emotion recognition system for dogs automatically identifying the emotions anger, fear, happiness, and relaxation. It is based on a previously trained machine learning model, which uses automatic pose estimation to differentiate emotional states of canines. Towards that goal, we have compiled a picture library with full body dog pictures featuring 400 images with 100 samples each for the states “Anger”, “Fear”, “Happiness” and “Relaxation”. A new dog keypoint detection model was built using the framework DeepLabCut for animal keypoint detector training. The newly trained detector learned from a total of 13,809 annotated dog images and possesses the capability to estimate the coordinates of 24 different dog body part keypoints. Our application is able to determine a dog’s emotional state visually with an accuracy between 60% and 70%, exceeding human capability to recognize dog emotions. Full article
(This article belongs to the Collection Machine Learning Approaches for User Identity)