Future Internet, Volume 12, Issue 3 (March 2020) – 19 articles

Cover Story: Low-power wide area networks (LPWANs) are a promising solution for long-range and low-power Internet of Things (IoT) and machine-to-machine (M2M) communication applications. This paper is primarily focused on defining a systematic and powerful approach for identifying the key characteristics of LPWAN applications, translating them into explicit requirements, and then deriving the associated design considerations for present and future technological candidates. The paper also discusses the emerging and intelligent applications of LPWANs and their characteristics, and describes and categorizes various wireless access technologies, LPWAN topologies, architectures, and technological candidates.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in this journal, which appear in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
15 pages, 716 KiB  
Review
A Systematic Review of Blockchain Literature in Logistics and Supply Chain Management: Identifying Research Questions and Future Directions
by Sebastian Kummer, David M. Herold, Mario Dobrovnik, Jasmin Mikl and Nicole Schäfer
Future Internet 2020, 12(3), 60; https://doi.org/10.3390/fi12030060 - 23 Mar 2020
Cited by 63 | Viewed by 12140
Abstract
Potential blockchain applications in logistics and supply chain management (LSCM) have gained increasing attention within both academia and industry. However, as a field in its infancy, blockchain research often lacks theoretical foundations, and it is not clear which organizational theories, and to what extent, are used to investigate blockchain technology in the field of LSCM. In response, based upon a systematic literature review, this paper: (a) identifies the most relevant organizational theories used in blockchain literature in the context of LSCM; and (b) examines the content of the identified organizational theories to formulate relevant research questions for investigating blockchain technology in LSCM. Our results show that blockchain literature in LSCM is based around six organizational theories, namely: agency theory, information theory, institutional theory, network theory, the resource-based view, and transaction cost analysis. We also show how these theories can be used to examine specific blockchain problems by identifying blockchain-specific research questions that are worthy of investigation. Full article
19 pages, 828 KiB  
Article
Do Cryptocurrency Prices Camouflage Latent Economic Effects? A Bayesian Hidden Markov Approach
by Constandina Koki, Stefanos Leonardos and Georgios Piliouras
Future Internet 2020, 12(3), 59; https://doi.org/10.3390/fi12030059 - 21 Mar 2020
Cited by 4 | Viewed by 4244
Abstract
We study the Bitcoin and Ether price series from a financial perspective. Specifically, we use two econometric models to perform a two-layer analysis of the correlation and prediction of the Bitcoin and Ether price series with traditional assets. In the first part of this study, we model the probability of positive returns via a Bayesian logistic model. Even though the fitting performance of the logistic model is poor, we find that traditional assets can explain some of the variability of the price returns. Along with the fact that standard models fail to capture the statistical and econometric attributes—such as extreme variability and heteroskedasticity—of cryptocurrencies, this motivates us to apply a novel Non-Homogeneous Hidden Markov model to these series. In particular, we model Bitcoin and Ether prices via the non-homogeneous Pólya-Gamma Hidden Markov (NHPG) model, since it has been shown to outperform its counterparts on conventional financial data. The transition probabilities of the underlying hidden process are modeled via a logistic link, whereas the observed series follow a mixture of normal regressions conditionally on the hidden process. Our results show that the NHPG algorithm has good in-sample performance and captures the heteroskedasticity of both series. It identifies frequent changes between the two states of the underlying Markov process. In what constitutes the most important implication of our study, we show that there exist linear correlations between the covariates and the ETH and BTC series. However, only the ETH series is affected non-linearly by a subset of the accounted covariates. Finally, we conclude that the large number of significant predictors, along with the algorithm's weak predictability performance, backs up earlier findings that cryptocurrencies are unlike any other financial assets and that predicting cryptocurrency price series is still a challenging task. These findings can be useful to investors, policy makers, and traders for portfolio allocation, risk management, and trading strategies. Full article
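As a rough illustration of the model class described in the abstract, the sketch below simulates a two-state non-homogeneous hidden Markov process: the probability of remaining in the current state follows a logistic link on a covariate, and observations come from state-dependent normal regressions. All parameter values and the covariate are invented for illustration; this is not the authors' NHPG implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_nhhmm(covariates, beta, coeffs, sigmas):
    """Simulate a 2-state non-homogeneous HMM: the probability of staying
    in the current state follows a logistic link on the covariate, and
    observations are state-dependent linear regressions with Gaussian
    noise (illustrative parameters only, not the paper's NHPG model)."""
    T = len(covariates)
    states = np.empty(T, dtype=int)
    obs = np.empty(T)
    state = 0
    for t in range(T):
        # Logistic link: P(stay in current state | covariate at time t)
        p_stay = 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * covariates[t])))
        if rng.random() > p_stay:
            state = 1 - state
        states[t] = state
        a, b = coeffs[state]
        obs[t] = a + b * covariates[t] + rng.normal(0.0, sigmas[state])
    return states, obs

x = rng.normal(size=500)               # stand-in covariate (e.g., a traditional asset's return)
s, y = simulate_nhhmm(x, beta=(1.5, 0.8),
                      coeffs=[(0.0, 0.2), (0.5, -1.0)],
                      sigmas=[0.1, 0.9])  # state 1 plays the high-volatility regime
```

Fitting such a model would invert this process, inferring the hidden states and regression coefficients from the observed returns.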
16 pages, 1050 KiB  
Article
Graph-based Method for App Usage Prediction with Attributed Heterogeneous Network Embedding
by Yifei Zhou, Shaoyong Li and Yaping Liu
Future Internet 2020, 12(3), 58; https://doi.org/10.3390/fi12030058 - 20 Mar 2020
Cited by 6 | Viewed by 4240
Abstract
Smartphones and mobile applications have become more and more widespread. Using the hardware and software of users’ mobile phones, we can thus collect a large amount of personal data, a large part of which concerns users’ application usage patterns. By transforming and extracting these data, we can infer user preferences, provide personalized services, and improve the user experience. More concretely, studying application usage patterns yields a variety of benefits, such as precise bandwidth allocation and app launch acceleration. However, achieving these benefits first requires predicting the next application accurately. In this paper, we propose AHNEAP, a novel network-embedding-based framework for predicting the next app to be used by characterizing the context information before a specific app is launched. AHNEAP transforms the historical app usage records in physical spaces into a large attributed heterogeneous network that contains three node types, three edge types, and several attributes such as app type and day of the week. Then, the representation learning process is conducted. Finally, the app usage prediction problem is defined as a link prediction problem and solved with a simple neural network. Experiments on the LiveLab project dataset demonstrate the effectiveness of our framework, which outperforms the three baseline methods for each tested user. Full article
(This article belongs to the Section Big Data and Augmented Intelligence)
37 pages, 3901 KiB  
Article
Considered Factors of Online News Based on Respondents’ Eye Activity Using Eye-Tracker Analysis
by Daniel Hadrian Yohandy, Djoko Budiyanto Setyohadi and Albertus Joko Santoso
Future Internet 2020, 12(3), 57; https://doi.org/10.3390/fi12030057 - 20 Mar 2020
Cited by 3 | Viewed by 4662
Abstract
The development of the internet as a source of information has penetrated many aspects of human life, as shown by the increasingly diverse substance of news in online news sources. Previous studies have stated that the way the substance of online news information is presented can have negative impacts, especially the emergence of anxiety in users; thus, managing the presentation of information becomes important. This study intends to explore factors that should be considered possible anxiety-inducers for readers of news sites. Analyses of areas of interest (AOIs), fixation, and heat maps from respondents’ eye activity obtained from eye-tracker data were combined with Beck Anxiety Inventory (BAI) measurement results to analyze anxiety among newsreaders. The results show that text is the dominant center of attention in various types of news. The reason text on online news sites induces higher anxiety is twofold. First, there are the respondents’ experiences. Second, text allows boundless possibilities in respondents’ imaginations in response to the news that has occurred. Full article
19 pages, 4211 KiB  
Article
Development of User-Participatory Crowdsensing System for Improved Privacy Preservation
by Mihui Kim and Junhyeok Yun
Future Internet 2020, 12(3), 56; https://doi.org/10.3390/fi12030056 - 20 Mar 2020
Viewed by 2982
Abstract
Recently, crowdsensing, which can provide various sensing services using consumer mobile devices, has been attracting considerable attention. The success of these services depends on active user participation; thus, a proper incentive mechanism is essential. However, if the sensing information provided by a user includes personal information and an attacker compromises the service provider, participation will be less active. Accordingly, personal information protection is an important element of crowdsensing services. In this study, we resolve this problem by separating sensing-data processing from the reward payment process. An arbitrary node in a sensing-data processing pool consisting of user nodes is selected for sensing-data processing, and only the processing results are sent to the service provider’s server to reward the data-providing node. The proposed user-participatory crowdsensing system is implemented on the Kaa Internet of Things (IoT) platform to evaluate its performance and demonstrate its feasibility. Full article
(This article belongs to the Special Issue Internet of Things for Smart City Applications)
20 pages, 393 KiB  
Review
Security of IoT Application Layer Protocols: Challenges and Findings
by Giuseppe Nebbione and Maria Carla Calzarossa
Future Internet 2020, 12(3), 55; https://doi.org/10.3390/fi12030055 - 17 Mar 2020
Cited by 77 | Viewed by 10472
Abstract
IoT technologies are becoming pervasive in public and private sectors and presently represent an integral part of our daily life. The advantages offered by these technologies are frequently coupled with serious security issues that are often not properly overseen or are even ignored. The IoT threat landscape is extremely wide and complex and involves a wide variety of hardware and software technologies. In this framework, the security of application layer protocols is of paramount importance, since these protocols are the basis of the communications among applications and services running on different IoT devices and on cloud/edge infrastructures. This paper offers a comprehensive survey of application layer protocol security by presenting the main challenges and findings. More specifically, the paper focuses on the most popular protocols devised in IoT environments for messaging/data sharing and for service discovery. The main threats of these protocols, as well as the Common Vulnerabilities and Exposures (CVE) for their products and services, are analyzed and discussed in detail. Good practices and measures that can be adopted to mitigate threats and attacks are also investigated. Our findings indicate that ensuring security at the application layer is very challenging. IoT devices are exposed to numerous security risks due to the lack of appropriate security services in the protocols, as well as to vulnerabilities or incorrect configuration of the products and services being deployed. Moreover, the constrained capabilities of these devices affect the types of security services that can be implemented. Full article
(This article belongs to the Collection Featured Reviews of Future Internet Research)
14 pages, 531 KiB  
Article
Feature Selection Algorithms as One of the Python Data Analytical Tools
by Nikita Pilnenskiy and Ivan Smetannikov
Future Internet 2020, 12(3), 54; https://doi.org/10.3390/fi12030054 - 16 Mar 2020
Cited by 30 | Viewed by 6113
Abstract
With the rapidly growing popularity of the Python programming language for machine learning applications, the gap between the needs of machine learning engineers and existing Python tools is increasing. This is especially noticeable in more classical machine learning fields, namely feature selection, as the community’s attention in the last decade has mainly shifted to neural networks. This paper has two main purposes. First, we survey existing open-source Python and Python-compatible feature selection libraries, show their problems, if any, and demonstrate the gap between these libraries and the modern state of the feature selection field. Then, we present the new open-source, scikit-learn-compatible ITMO FS (Information Technologies, Mechanics and Optics University feature selection) library that is currently under development, explain how its architecture reflects modern views on feature selection, provide code examples of its use with Python, and compare its performance with other Python feature selection libraries. Full article
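To illustrate the scikit-learn-compatible interface style the abstract refers to, the sketch below composes a filter-type feature selector with a classifier in a pipeline, using scikit-learn's own SelectKBest as a stand-in; the actual ITMO FS class names and API may differ.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# Synthetic data: 20 features, only 5 of which are informative.
X, y = make_classification(n_samples=300, n_features=20,
                           n_informative=5, n_redundant=2,
                           random_state=42)

# A filter-style selector composed with a classifier -- the interface
# pattern that scikit-learn-compatible selection libraries follow.
pipe = Pipeline([
    ("select", SelectKBest(score_func=f_classif, k=5)),
    ("clf", LogisticRegression(max_iter=1000)),
])

scores = cross_val_score(pipe, X, y, cv=5)  # selector refit inside each fold
```

Because the selector implements the standard `fit`/`transform` contract, it can be swapped for any compatible selection algorithm without changing the surrounding pipeline.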
12 pages, 304 KiB  
Article
Consensus Crash Testing: Exploring Ripple’s Decentralization Degree in Adversarial Environments
by Klitos Christodoulou, Elias Iosif, Antonios Inglezakis and Marinos Themistocleous
Future Internet 2020, 12(3), 53; https://doi.org/10.3390/fi12030053 - 16 Mar 2020
Cited by 30 | Viewed by 4565
Abstract
The inception of Bitcoin as a peer-to-peer payment system, and its underlying blockchain data-structure and protocol, has led to an increased interest in deploying scalable and reliable distributed-ledger systems that build on robust consensus protocols. A critical requirement of such systems is to provide enough fault tolerance in the presence of adversarial attacks or network faults. This is essential to guarantee liveness when the network does not behave as expected and to ensure that the underlying nodes agree on a unique order of transactions over a shared state. In comparison with traditional distributed systems, the deployment of a distributed-ledger system should take into account the hidden game-theoretical aspects of such protocols, where actors are competing with each other in an environment which is likely to experience various well-motivated malicious and adversarial attacks. First, this paper discusses the fundamental principles of existing consensus protocols in the context of both permissioned and permissionless distributed-ledger systems. The main contribution of this work deals with observations from experimenting with Ripple’s consensus protocol as it is embodied in the XRP Ledger. The main experimental finding suggests that, when a low percentage of malicious nodes is present, the centralization degree of the network can be significantly relaxed while still ensuring low convergence times. These findings are of particular importance when engineering a consensus algorithm that aims to balance security with decentralization. Full article
(This article belongs to the Special Issue Blockchain: Current Challenges and Future Prospects/Applications)
14 pages, 2901 KiB  
Review
Emerging Trends and Innovation Modes of Internet Finance—Results from Co-Word and Co-Citation Networks
by Xiaoyu Li, Jiahong Yuan, Yan Shi, Zilai Sun and Junhu Ruan
Future Internet 2020, 12(3), 52; https://doi.org/10.3390/fi12030052 - 16 Mar 2020
Cited by 12 | Viewed by 4844
Abstract
Internet finance is a financial mode combining the traditional financial industry with Internet technologies, and it has become a crucial part of the financial field. Due to the rapid change of information technologies and public financial needs, Internet finance has produced quite a few specific operation modes, which have interested many scholars. To better understand its development process and innovation modes, we used bibliometrics to analyze 2,877 articles on Internet finance in Web of Science. Through the co-word network, the co-citation network, and various results generated by CiteSpace, we recognized six main modes of Internet finance, namely: Internet banking, peer-to-peer lending (P2P lending), crowdfunding, big data finance, digital currency, and fintech. Emerging research topics and the development history of each mode are also detected. We find that the mainstream modes in current research are P2P lending and crowdfunding, while research on fintech and digital currency has just begun. Through the review, we also suggest future research directions for each mode. These results will help deepen relevant scholars’ understanding of Internet finance and provide guidance for choosing research directions. Full article
(This article belongs to the Section Smart System Infrastructure and Applications)
2 pages, 139 KiB  
Editorial
The Internet of Things for Smart Environments
by Giuseppe Ruggeri, Valeria Loscrí, Marica Amadeo and Carlos T. Calafate
Future Internet 2020, 12(3), 51; https://doi.org/10.3390/fi12030051 - 14 Mar 2020
Cited by 5 | Viewed by 3367
Abstract
By leveraging the global interconnection of billions of tiny smart objects, the Internet of Things (IoT) paradigm is the main enabler of smart environments, ranging from smart cities to building automation, smart transportation, smart grids, and healthcare [...] Full article
(This article belongs to the Special Issue The Internet of Things for Smart Environments)
26 pages, 1830 KiB  
Article
Multi-formalism Models for Performance Engineering
by Enrico Barbierato, Marco Gribaudo and Giuseppe Serazzi
Future Internet 2020, 12(3), 50; https://doi.org/10.3390/fi12030050 - 13 Mar 2020
Viewed by 2851
Abstract
Nowadays, the necessity to predict the performance of cloud- and edge-computing-based architectures has become paramount in order to respond to the pressure of data growth and more aggressive service level agreements. In this respect, the problem can be analyzed by creating a model of a given system and studying the values of the performance indices generated by the model’s simulation. This process requires considering a set of paradigms, carefully balancing the benefits and the disadvantages of each one. While queuing networks are particularly suited to modeling cloud and edge computing architectures, particular occurrences, such as autoscaling, require different techniques to be analyzed. This work presents a review of paradigms designed to model specific events in different scenarios, such as timeout with quorum-based join, approximate computing with finite capacity region, MapReduce with class switch, dynamic provisioning in hybrid clouds, and batching of requests in e-Health applications. The case studies are investigated by implementing models based on the above-mentioned paradigms and analyzing them with discrete event simulation techniques. Full article
(This article belongs to the Special Issue Performance Evaluation in the Era of Cloud and Edge Computing)
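A queueing building block of the kind such multi-formalism models compose can be sketched as a minimal discrete-event simulation of an M/M/1 queue; this toy example is illustrative and is not one of the paper's case studies. With arrival rate λ = 0.8 and service rate μ = 1.0, the estimated mean waiting time should approach the analytical value ρ/(μ − λ) = 4.

```python
import random

def simulate_mm1(arrival_rate, service_rate, n_customers, seed=0):
    """Minimal discrete-event simulation of an M/M/1 queue: exponential
    interarrival and service times, one server, FIFO discipline.
    Returns the mean waiting time in the queue (time before service)."""
    rng = random.Random(seed)
    t_arrive = 0.0       # arrival time of the current customer
    server_free_at = 0.0 # time at which the server next becomes idle
    total_wait = 0.0
    for _ in range(n_customers):
        t_arrive += rng.expovariate(arrival_rate)
        start = max(t_arrive, server_free_at)   # wait if the server is busy
        total_wait += start - t_arrive
        server_free_at = start + rng.expovariate(service_rate)
    return total_wait / n_customers

mean_wait = simulate_mm1(arrival_rate=0.8, service_rate=1.0, n_customers=50000)
```

Events that break the queueing formalism, such as autoscaling, are exactly where the paper's other paradigms (class switch, finite capacity regions, etc.) take over.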
15 pages, 2282 KiB  
Review
A Review of the Control Plane Scalability Approaches in Software Defined Networking
by Abdelrahman Abuarqoub
Future Internet 2020, 12(3), 49; https://doi.org/10.3390/fi12030049 - 11 Mar 2020
Cited by 16 | Viewed by 5341
Abstract
Recent advances in cloud-based information and communications services hold the potential to overcome the scalability and complex-maintenance limitations of traditional networks. Software Defined Networking (SDN) has surfaced as a promising paradigm that mitigates such limitations while offering flexible network management. In particular, SDN separates the control plane from the data plane to achieve abstraction of lower-level functionality, hence allowing more efficient network management and utilization. However, SDN suffers from various performance and scalability problems, leading to significant research efforts on maximizing the scalability of the control plane. This paper reviews the different SDN controller scalability approaches, both topology-based and mechanism-based, and discusses and analyzes how they attempt to solve the scalability challenge. Furthermore, this paper elaborates on promising research trends and challenges. Our insights are also discussed to stimulate further research efforts addressing control plane scalability in SDN. Full article
12 pages, 819 KiB  
Article
Smart Devices Security Enhancement via Power Supply Monitoring
by Dimitrios Myridakis, Georgios Spathoulas, Athanasios Kakarountas and Dimitrios Schinianakis
Future Internet 2020, 12(3), 48; https://doi.org/10.3390/fi12030048 - 10 Mar 2020
Cited by 5 | Viewed by 3590
Abstract
The continuous growth in the number of Internet of Things (IoT) devices and their inclusion in public and private infrastructures has introduced new applications to the market and to our day-to-day life. At the same time, these devices create a potential threat to personal and public security. This is easily understood considering either the sensitivity of the collected data or our dependence on the devices’ operation. Considering that most IoT devices are low cost and are used for various tasks, such as monitoring people or controlling indoor environmental conditions, the security factor should be enhanced. This paper presents the exploitation of a side-channel attack technique for protecting low-cost smart devices in an intuitive way. The work aims to extend the dataset provided to an Intrusion Detection System (IDS) in order to achieve higher accuracy in anomaly detection. Thus, along with typical data provided to an IDS, such as network traffic, transmitted packets, and CPU usage, it is proposed to include information regarding the device’s physical state and behaviour, such as its power consumption, supply current, and emitted heat. Awareness of the typical operation of a smart device in terms of operation and functionality may prove valuable, since any deviation may warn of an operational or functional anomaly. In this paper, the deviation (either increase or decrease) of the supply current is exploited for this reason. This work aims to improve the intrusion detection process for IoT devices by proposing new inputs of interest with a collateral interest of study. In parallel, malfunction of the device is also detected, extending this work’s application to issues of reliability and maintainability. The results show 100% attack detection; to the best of our knowledge, this is the first time that a low-cost security solution suitable for every type of target device has been presented. Full article
(This article belongs to the Special Issue Security and Reliability of IoT – Selected Papers from SecRIoT 2019)
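The supply-current deviation check described in the abstract can be sketched as a simple baseline-and-threshold detector; the current readings and the 3σ threshold below are hypothetical stand-ins, not values taken from the paper.

```python
import statistics

def fit_baseline(samples):
    """Learn the typical supply-current profile from normal operation."""
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    return mu, sigma

def is_anomalous(reading, mu, sigma, k=3.0):
    """Flag any reading deviating from the baseline by more than k standard
    deviations, in either direction (increase or decrease), matching the
    paper's use of both current rises and drops as warning signs."""
    return abs(reading - mu) > k * sigma

# Hypothetical supply-current samples (amperes) from normal operation.
normal = [0.50, 0.52, 0.49, 0.51, 0.50, 0.53, 0.48, 0.51]
mu, sigma = fit_baseline(normal)

print(is_anomalous(0.51, mu, sigma))  # typical draw -> False
print(is_anomalous(0.80, mu, sigma))  # spike (e.g., hijacked workload) -> True
print(is_anomalous(0.20, mu, sigma))  # drop (e.g., disabled component) -> True
```

Feeding such flags to an IDS alongside network-level features is the dataset extension the paper argues for.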
12 pages, 1473 KiB  
Article
Improved Proactive Routing Protocol Considering Node Density Using Game Theory in Dense Networks
by Omuwa Oyakhire and Koichi Gyoda
Future Internet 2020, 12(3), 47; https://doi.org/10.3390/fi12030047 - 09 Mar 2020
Cited by 7 | Viewed by 3400
Abstract
In mobile ad hoc networks, network nodes cooperate by forwarding packets from the source to the destination. As networks become denser, more control packets are forwarded, consuming more bandwidth and potentially causing packet loss. Recently, game theory has been applied to address several problems in mobile ad hoc networks, such as energy efficiency. In this paper, we apply game theory to reduce the number of control packets in dense networks. We choose a proactive routing protocol, the Optimized Link State Routing (OLSR) protocol. We consider two strategies in this method, willingness_always and willingness_never, to reduce the multipoint relay (MPR) ratio in dense networks. Thus, nodes with less influence on other nodes are excluded from nomination as MPRs. Simulations were used to confirm the efficiency of our improved method. The results show that the MPR ratio was significantly reduced and the packet delivery ratio was increased compared to the conventional protocol. Full article
(This article belongs to the Special Issue Machine Learning Advances Applied to Wireless Multi-hop IoT Networks)
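The exclusion of low-influence neighbors from MPR nomination can be sketched with a simplified greedy MPR selection in which neighbors whose willingness is set to "never" are barred as candidates. The topology and strategy assignment below are invented for illustration and simplify OLSR's actual heuristic.

```python
WILL_ALWAYS, WILL_NEVER = "always", "never"

def select_mprs(one_hop, two_hop_of, willingness):
    """Greedy MPR selection: repeatedly pick the 1-hop neighbor covering the
    most still-uncovered 2-hop neighbors. Neighbors with willingness 'never'
    are excluded up front, mirroring the idea of barring low-influence nodes
    from MPR nomination (simplified sketch, not the full OLSR heuristic)."""
    uncovered = set().union(*two_hop_of.values()) - set(one_hop)
    candidates = [n for n in one_hop if willingness[n] != WILL_NEVER]
    mprs = set()
    while uncovered:
        best = max(candidates, key=lambda n: len(two_hop_of[n] & uncovered))
        if not two_hop_of[best] & uncovered:
            break  # remaining 2-hop nodes unreachable via willing neighbors
        mprs.add(best)
        uncovered -= two_hop_of[best]
    return mprs

# Toy topology: neighbor C covers little, so it opts out with 'never'.
one_hop = ["A", "B", "C", "D"]
two_hop_of = {"A": {"x", "y"}, "B": {"y", "z"}, "C": {"z"}, "D": {"w"}}
willingness = {"A": WILL_ALWAYS, "B": WILL_ALWAYS,
               "C": WILL_NEVER, "D": WILL_ALWAYS}
mprs = select_mprs(one_hop, two_hop_of, willingness)
```

Fewer willing candidates means fewer MPRs and thus fewer retransmitted topology-control packets, which is the bandwidth saving the paper targets.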
25 pages, 5788 KiB  
Review
LPWAN Technologies: Emerging Application Characteristics, Requirements, and Design Considerations
by Bharat S. Chaudhari, Marco Zennaro and Suresh Borkar
Future Internet 2020, 12(3), 46; https://doi.org/10.3390/fi12030046 - 06 Mar 2020
Cited by 149 | Viewed by 14838
Abstract
Low-power wide area network (LPWAN) technology is a promising solution for long-range and low-power Internet of Things (IoT) and machine-to-machine (M2M) communication applications. This paper focuses on defining a systematic and powerful approach for identifying the key characteristics of such applications, translating them into explicit requirements, and then deriving the associated design considerations. LPWANs are resource-constrained networks and are primarily characterized by long battery life operation, extended coverage, high capacity, and low device and deployment costs. These characteristics translate into a key set of requirements including M2M traffic management, massive capacity, energy efficiency, low power operations, extended coverage, security, and interworking. The set of corresponding design considerations is identified in terms of two categories, desired or expected ones and enhanced ones, which reflect the wide range of characteristics associated with LPWAN-based applications. Prominent design constructs include admission and user traffic management, interference management, energy saving modes of operation, lightweight media access control (MAC) protocols, accurate location identification, security coverage techniques, and flexible software re-configurability. Topological and architectural options for interconnecting LPWAN entities are discussed. The major proprietary and standards-based LPWAN technology solutions available in the marketplace are presented. These include Sigfox, LoRaWAN, Narrowband IoT (NB-IoT), and long term evolution (LTE)-M, among others. The relevance of upcoming cellular 5G technology and its complementary relationship with LPWAN technology are also discussed. Full article
(This article belongs to the Collection Featured Reviews of Future Internet Research)
13 pages, 1421 KiB  
Article
Introducing External Knowledge to Answer Questions with Implicit Temporal Constraints over Knowledge Base
by Wenqing Wu, Zhenfang Zhu, Qiang Lu, Dianyuan Zhang and Qiangqiang Guo
Future Internet 2020, 12(3), 45; https://doi.org/10.3390/fi12030045 - 05 Mar 2020
Cited by 5 | Viewed by 3235
Abstract
Knowledge base question answering (KBQA) aims to analyze the semantics of natural language questions and return accurate answers from the knowledge base (KB). More and more studies have applied knowledge bases to question answering systems; when using a KB to answer a natural language question, some words imply the tense (e.g., “original” and “previous”) and play a limiting role in the question. However, most existing methods for KBQA cannot model a question with implicit temporal constraints. In this work, we propose a model based on a bidirectional attentive memory network, which obtains the temporal information in the question through attention mechanisms and external knowledge. Specifically, we encode the external knowledge as vectors and use additive attention between the question and the external knowledge to obtain the temporal information, then further enhance the question vector to increase accuracy. On the WebQuestions benchmark, our method not only performs better on the overall data, but also shows excellent performance on questions with implicit temporal constraints, evaluated separately from the overall data. As we use attention mechanisms, our method also offers better interpretability. Full article
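The additive attention step between the question and the external knowledge can be sketched as below; dimensions, weight matrices, and inputs are random stand-ins for illustration, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8  # embedding dimension (illustrative)

def additive_attention(question, knowledge, W_q, W_k, v):
    """Additive (Bahdanau-style) attention: score each external-knowledge
    vector against the question via v^T tanh(W_q q + W_k k_i), softmax the
    scores, and return the question vector additively enhanced with the
    attention-weighted knowledge context."""
    scores = np.tanh(question @ W_q + knowledge @ W_k) @ v
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()              # softmax over knowledge entries
    context = weights @ knowledge         # weighted sum of knowledge vectors
    return question + context, weights    # enhanced question, attention weights

q = rng.normal(size=d)           # encoded question
K = rng.normal(size=(5, d))      # five encoded external-knowledge entries
W_q, W_k = rng.normal(size=(d, d)), rng.normal(size=(d, d))
v = rng.normal(size=d)

enhanced, w = additive_attention(q, K, W_q, W_k, v)
```

The returned weights also provide the interpretability the abstract mentions: they show which knowledge entries, e.g., temporal hints, the model attended to.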
14 pages, 477 KiB  
Article
RDTIDS: Rules and Decision Tree-Based Intrusion Detection System for Internet-of-Things Networks
by Mohamed Amine Ferrag, Leandros Maglaras, Ahmed Ahmim, Makhlouf Derdour and Helge Janicke
Future Internet 2020, 12(3), 44; https://doi.org/10.3390/fi12030044 - 02 Mar 2020
Cited by 158 | Viewed by 8398
Abstract
This paper proposes a novel intrusion detection system (IDS), named RDTIDS, for Internet-of-Things (IoT) networks. RDTIDS combines different classifier approaches based on decision-tree and rule-based concepts, namely the REP Tree, JRip, and Forest PA algorithms. Specifically, the first and second methods take features of the data set as input and classify the network traffic as attack or benign. The third classifier uses the features of the initial data set, in addition to the outputs of the first and second classifiers, as inputs. The experimental results, obtained by analyzing the proposed IDS on the CICIDS2017 and BoT-IoT datasets, attest to its superiority in terms of accuracy, detection rate, false alarm rate, and time overhead compared to existing state-of-the-art schemes.
(This article belongs to the Special Issue Security and Reliability of IoT---Selected Papers from SecRIoT 2019)
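The two-stage design described above (two base classifiers on the raw features, a third fed those features plus the first two's outputs) can be sketched with scikit-learn stand-ins. REP Tree, JRip, and Forest PA are Weka algorithms, so the classifiers and toy data below are only illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Toy stand-in data; the paper evaluates on CICIDS2017 and BoT-IoT.
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: two base classifiers on the raw features
# (sklearn stand-ins for REP Tree and JRip).
clf1 = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
clf2 = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)

# Stage 2: a third classifier (stand-in for Forest PA) takes the original
# features plus the first two classifiers' predictions as extra columns.
def augment(X):
    return np.column_stack([X, clf1.predict(X), clf2.predict(X)])

clf3 = RandomForestClassifier(random_state=0).fit(augment(X_tr), y_tr)
print(round(clf3.score(augment(X_te), y_te), 2))
```

This is the classic stacking pattern: the final classifier can learn when to trust each base learner in addition to the raw features.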
17 pages, 5994 KiB  
Article
BASN—Learning Steganography with a Binary Attention Mechanism
by Pin Wu, Xuting Chang, Yang Yang and Xiaoqiang Li
Future Internet 2020, 12(3), 43; https://doi.org/10.3390/fi12030043 - 27 Feb 2020
Cited by 2 | Viewed by 3672
Abstract
Secret-information sharing through image carriers has attracted much research attention in recent years as images increasingly dominate the Internet and mobile applications. The technique of embedding secret information in images without detection is called image steganography. With the boom in convolutional neural networks (CNNs), neural-network-automated tasks have become more deeply embedded in our daily lives. However, a series of wrong labels or bad captions on images carrying hidden data can raise skepticism and ultimately amount to a self-incriminating exposure. To improve the security of image steganography and minimize distortion of task results, models must keep the feature maps generated by task-specific networks independent of any hidden information embedded in the carrier. This paper introduces a binary attention mechanism into image steganography to help alleviate this security issue and, at the same time, increase the embedding payload capacity. The experimental results show that our method achieves high payload capacity with little feature-map distortion and still resists detection by state-of-the-art image steganalysis algorithms.
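As a toy illustration of mask-guided embedding, a binary "attention" mask can select which pixels carry payload bits, here via simple LSB substitution. The paper learns such a mask with CNNs; this sketch only illustrates the masking idea, not BASN itself.

```python
import numpy as np

# Toy mask-guided LSB embedding: a binary mask marks pixels considered
# safe to modify; payload bits go only into those pixels' least
# significant bits. Illustrative only, not the paper's CNN-based method.
rng = np.random.default_rng(1)
cover = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
mask = rng.integers(0, 2, size=(4, 4)).astype(bool)        # 1 = safe to embed
payload = rng.integers(0, 2, size=mask.sum(), dtype=np.uint8)

stego = cover.copy()
stego[mask] = (stego[mask] & 0xFE) | payload               # overwrite LSBs under the mask

recovered = stego[mask] & 1                                # receiver extracts with the same mask
assert (recovered == payload).all()
assert (stego[~mask] == cover[~mask]).all()                # pixels outside the mask untouched
```

Concentrating changes where the mask allows them is what lets a learned mask trade payload capacity against distortion of downstream feature maps.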
14 pages, 1553 KiB  
Article
Language Cognition and Pronunciation Training Using Applications
by Ming Sung Kan and Atsushi Ito
Future Internet 2020, 12(3), 42; https://doi.org/10.3390/fi12030042 - 25 Feb 2020
Cited by 2 | Viewed by 5228
Abstract
In language learning, adults appear superior in their ability to memorize knowledge of new languages, and they have better learning strategies, experience, and intelligence with which to integrate new knowledge. However, unless one learns pronunciation in childhood, it is almost impossible to reach a native-level accent. In this research, we take the difficulty of learning tonal pronunciation in Mandarin as an example and analyze the difficulties of tone learning and the shortcomings of common learning methods using cognitive load theory. With tasks designed to match the learner's perception ability, based on perception experiments and small-step learning, our perception-training app improves tone pronunciation more effectively than existing apps with voice-analysis functions. Furthermore, the learning effect was greatly improved by optimizing the app's interface and operation procedures. However, when pronunciation practice was combined with perception training, practice with insufficient feedback could lead to pronunciation errors. We therefore also studied pronunciation practice using machine learning, aiming to train a model for the pronunciation-task design rather than for classification. Using voices designed as training data, we trained a model for pronunciation training and demonstrated that supporting pronunciation practice with machine learning is practicable.
(This article belongs to the Special Issue Cognitive Infocommunications–Theory and Applications)