Future Internet, Volume 12, Issue 11 (November 2020) – 30 articles

Cover Story: Fast communication, especially over large distances, is of great importance. Recently, increasing data demand and a crowded radio-frequency spectrum have become crucial issues. Free-space optical communication (FSO) was developed as an alternative to wired communication systems. It allows efficient voice, video, and data transmission, and thanks to its large bandwidth, FSO can be used in various applications. The rapid development of high-speed connection technology reduces repair downtime and makes it possible to quickly establish a backup network in an emergency. This paper discusses the history of communication, from mirrors and the optical telegraph to modern wireless systems, in particular free-space optical communication, and outlines future directions for the development of optical communication.
  • Issues are regarded as officially published after their release is announced to the table-of-contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Although papers are published in both HTML and PDF forms, PDF is the official format. To view a paper in PDF form, click on the "PDF Full-text" link and open it with the free Adobe Reader.
21 pages, 2423 KiB  
Article
ndnIoT-FC: IoT Devices as First-Class Traffic in Name Data Networks
by Luís Gameiro, Carlos Senna and Miguel Luís
Future Internet 2020, 12(11), 207; https://doi.org/10.3390/fi12110207 - 21 Nov 2020
Cited by 10 | Viewed by 2655
Abstract
In recent years, we have been witnessing a radical change in the way devices are connected to the Internet. In this new scope, the traditional TCP/IP host-centric network fails in large-scale mobile wireless distributed environments, such as IoT scenarios, due to node mobility, dynamic topologies and intermittent connectivity, and the Information-Centric Networking (ICN) paradigm has been considered the most promising candidate to overcome the drawbacks of host-centric architectures. Despite bringing efficient solutions for content distribution, the basic ICN operating principle, where content must always be associated with an interest, has serious restrictions in IoT environments in relation to scale, performance, and naming, among others. To address such drawbacks, we present ndnIoT-FC, an NDN-based architecture that respects the ICN rules but offers special treatment for IoT traffic. It combines efficient hybrid naming with strategies to minimize the number of interests, and uses caching strategies that virtually eliminate copies of IoT data from intermediate nodes. ndnIoT-FC makes available a new NDN-based application-to-application protocol to implement a signature model of operation and tools to manage its life cycle, following a publisher-subscriber scheme. To demonstrate the versatility of the proposed architecture, we show the results of the efficient gathering of environmental information in a simulation environment considering distinct use cases. Full article
(This article belongs to the Special Issue Feature Papers for Future Internet—Internet of Things Section)
30 pages, 5876 KiB  
Article
Monitoring and Support for Elderly People Using LoRa Communication Technologies: IoT Concepts and Applications
by José Paulo Lousado and Sandra Antunes
Future Internet 2020, 12(11), 206; https://doi.org/10.3390/fi12110206 - 20 Nov 2020
Cited by 26 | Viewed by 4575
Abstract
The pandemic declared by the World Health Organization due to the SARS-CoV-2 virus (COVID-19) awakened us to a reality that most of us were previously unaware of: isolation, confinement and the massive use of information and communication technologies, as well as increased knowledge of the difficulties and limitations of their use. This article focuses on the rapid implementation of low-cost technologies, which allow us to answer a fundamental question: how can near real-time monitoring of the elderly, their health conditions and their homes, especially for those living in isolated and remote areas, be provided in order to care for them and protect them from risky events? The system proposed here as a proof of concept uses low-cost devices for communication and data processing, supported by Long-Range (LoRa) technology and a connection to The Things Network. It incorporates various sensors, both personal and in the residence, allowing family members, neighbors and authorized entities, including security forces, to access the health condition of system users, the habitability of their homes and their urgent needs. This shows that it is possible, using low-cost systems, to implement sensor networks for monitoring the elderly using a LoRa gateway and other support infrastructures. Full article
(This article belongs to the Special Issue Data Science and Knowledge Discovery)
18 pages, 365 KiB  
Article
Failure Mode and Effect Analysis for Cyber-Physical Systems
by João Oliveira, Gonçalo Carvalho, Bruno Cabral and Jorge Bernardino
Future Internet 2020, 12(11), 205; https://doi.org/10.3390/fi12110205 - 20 Nov 2020
Cited by 12 | Viewed by 2975
Abstract
Cyber-Physical Systems (CPS) are a prominent component of the modern digital transformation, combining the dynamics of physical processes with those of software and networks. Critical infrastructures have built-in CPS, and assessing their risk is crucial to avoid significant losses, both economic and social. As CPS are increasingly attached to the world’s main industries, these systems’ criticality depends not only on software efficiency and availability but also on cyber-security awareness. Given this, and because Failure Mode and Effect Analysis (FMEA) is one of the most effective methods for assessing the risk of critical infrastructures, in this paper we show how this method performs in the analysis of CPS threats, also exposing the main drawbacks concerning CPS risk assessment. We first propose a risk prevention analysis of the Communications-Based Train Control (CBTC) system, which involves exploiting cyber vulnerabilities, and we introduce a novel approach to estimating the failure modes’ Risk Priority Number (RPN). We also propose how to adapt the FMEA method to the requirements of CPS risk evaluation. We applied the proposed procedure to the CBTC system use case, since it is a CPS with a substantial cyber component and network data transfer. Full article
(This article belongs to the Special Issue Feature Papers for Future Internet—Cybersecurity Section)
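The RPN mentioned in the abstract is conventionally computed as the product of three 1-10 ratings: severity, occurrence, and detection. As a rough illustration of that textbook formula (the paper proposes its own modified estimation, and the CBTC failure modes and ratings below are hypothetical):

```python
def rpn(severity, occurrence, detection):
    # Classic FMEA Risk Priority Number: each factor is rated on a
    # 1-10 scale; a higher RPN means the failure mode should be
    # treated with higher priority.
    for v in (severity, occurrence, detection):
        if not 1 <= v <= 10:
            raise ValueError("FMEA ratings are on a 1-10 scale")
    return severity * occurrence * detection

# Rank hypothetical CBTC failure modes by RPN, highest risk first.
modes = {
    "jammed radio link": (9, 4, 6),        # S, O, D -> RPN 216
    "spoofed position report": (10, 2, 8), # S, O, D -> RPN 160
}
ranked = sorted(modes, key=lambda m: rpn(*modes[m]), reverse=True)
```

One drawback the FMEA literature notes, and which motivates modified estimations like the paper's, is that very different (S, O, D) triples can yield the same RPN.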
15 pages, 257 KiB  
Article
Digital Competence and Gender: Teachers in Training. A Case Study
by Mario Grande-de-Prado, Ruth Cañón, Sheila García-Martín and Isabel Cantón
Future Internet 2020, 12(11), 204; https://doi.org/10.3390/fi12110204 - 20 Nov 2020
Cited by 26 | Viewed by 4045
Abstract
ICTs are simultaneously an important tool and subject in teacher training. It follows that digital competence is fundamental and constitutes a significant educational challenge, particularly with regard to the digital divide or gap by gender. The aim is to identify and analyze self-perceptions of digital skills, and their possible relationship to gender, in first-year university students taking a degree in primary education teacher training at a Spanish faculty of education. This is a descriptive study using an ex post facto method and collecting data from a questionnaire administered for four consecutive years to the above-mentioned subjects. The results revealed gender differences in the students’ reported perceptions. Men were more likely to perceive themselves as competent in the use of ICTs, reporting better information management and online collaboration skills using digital media. In addition, they made more use of computers as their sole device for browsing, downloading, and streaming, and felt more confident about solving problems with devices. In contrast, women reported making more use of mobile phones and were more familiar with social media and with aspects related to image and text processing and graphic design. Full article
28 pages, 6436 KiB  
Article
Pulverization in Cyber-Physical Systems: Engineering the Self-Organizing Logic Separated from Deployment
by Roberto Casadei, Danilo Pianini, Andrea Placuzzi, Mirko Viroli and Danny Weyns
Future Internet 2020, 12(11), 203; https://doi.org/10.3390/fi12110203 - 19 Nov 2020
Cited by 27 | Viewed by 2934
Abstract
Emerging cyber-physical systems, such as robot swarms, crowds of augmented people, and smart cities, require well-crafted self-organizing behavior to properly deal with dynamic environments and pervasive disturbances. However, the infrastructures providing networking and computing services to support these systems are becoming increasingly complex, layered and heterogeneous—consider the case of the edge–fog–cloud interplay. This typically hinders the application of self-organizing mechanisms and patterns, which are often designed to work on flat networks. To promote reuse of behavior and flexibility in infrastructure exploitation, we argue that self-organizing logic should be largely independent of the specific application deployment. We show that this separation of concerns can be achieved through a proposed “pulverization approach”: the global system behavior of application services gets broken into smaller computational pieces that are continuously executed across the available hosts. This model can then be instantiated in the aggregate computing framework, whereby self-organizing behavior is specified compositionally. We showcase how the proposed approach enables expressing the application logic of a self-organizing cyber-physical system in a deployment-independent fashion, and simulate its deployment on multiple heterogeneous infrastructures that include cloud, edge, and LoRaWAN network elements. Full article
13 pages, 893 KiB  
Article
Portfolio Learning Based on Deep Learning
by Wei Pan, Jide Li and Xiaoqiang Li
Future Internet 2020, 12(11), 202; https://doi.org/10.3390/fi12110202 - 18 Nov 2020
Cited by 4 | Viewed by 2694
Abstract
Traditional portfolio theory divides stocks into different categories using indicators such as industry, market value, and liquidity, and then selects representative stocks accordingly. In this paper, we propose a novel portfolio learning approach based on deep learning and apply it to China’s stock market. Specifically, the method is based on the similarity of deep features extracted from candlestick charts. First, we obtained complete stock information from Tushare, a professional financial data interface. These raw time-series data are then plotted as candlestick charts to build an image dataset for studying the stock market. Next, the method extracts high-dimensional features from the candlestick charts through an autoencoder. After that, K-means is used to cluster these high-dimensional features. Finally, we choose one stock from each category according to the Sharpe ratio, obtaining a low-risk, high-return portfolio. Extensive experiments are conducted on stocks in the Chinese stock market for evaluation. The results demonstrate that the proposed portfolio outperforms the market’s leading funds and the Shanghai Stock Exchange Composite Index (SSE Index) on a number of metrics. Full article
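The selection step of such a pipeline (cluster deep features, then pick the best-Sharpe stock per cluster) can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: the feature vectors here take the place of autoencoder embeddings of candlestick charts, and the k-means is a deliberately tiny NumPy version with naive first-k initialization.

```python
import numpy as np

def sharpe_ratio(returns, risk_free=0.0):
    # Annualized Sharpe ratio of a daily return series (252 trading days).
    excess = returns - risk_free
    return np.sqrt(252) * excess.mean() / excess.std()

def select_portfolio(features, sharpe, k):
    # Cluster the stocks' feature vectors with a tiny k-means, then pick
    # the highest-Sharpe stock index in each non-empty cluster.
    centers = features[:k].copy()
    for _ in range(50):
        labels = np.argmin(((features[:, None] - centers) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = features[labels == c].mean(axis=0)
    return [int(np.flatnonzero(labels == c)[np.argmax(sharpe[labels == c])])
            for c in range(k) if np.any(labels == c)]
```

With two well-separated feature groups and per-stock Sharpe ratios, `select_portfolio` returns one representative index per group, mirroring the "one stock per category" rule described above.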
13 pages, 4639 KiB  
Article
Geospatial Assessment of the Territorial Road Network by Fractal Method
by Mikolaj Karpinski, Svitlana Kuznichenko, Nadiia Kazakova, Oleksii Fraze-Frazenko and Daniel Jancarczyk
Future Internet 2020, 12(11), 201; https://doi.org/10.3390/fi12110201 - 17 Nov 2020
Cited by 10 | Viewed by 2429
Abstract
This paper proposes an approach to the geospatial assessment of a territorial road network based on fractal theory. This approach allows us to obtain quantitative values of spatial complexity for any transport network and, in contrast to the classical indicators of the transport provision of a territory (Botcher, Henkel, Engel, Goltz, Uspensky, etc.), to consider only the complexity level of the network itself, regardless of the area of the territory. The degree of complexity is measured by a fractal dimension. A method for calculating the fractal dimension based on a combination of box counting and GIS analysis is proposed. We created a geoprocessing script tool for the GIS software system ESRI ArcGIS 10.7 and studied the spatial pattern of the transport networks of Ukraine and other countries of the world. The results of the study will help to better understand different aspects of the development of transport networks, their changes over time and their impact on the socioeconomic indicators of urban development. Full article
(This article belongs to the Special Issue Data Science and Knowledge Discovery)
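As a rough illustration of the box-counting idea (not the authors' ArcGIS geoprocessing tool), the fractal dimension of a rasterized road network can be estimated by counting, at several box sizes s, how many boxes contain at least one road pixel, and fitting the slope of log N(s) against log(1/s):

```python
import numpy as np

def box_counting_dimension(grid, box_sizes):
    # grid: 2-D boolean raster where True marks road pixels.
    # box_sizes: list of box edge lengths in pixels.
    # Returns the slope of log N(s) vs. log(1/s), i.e. the box-counting dimension.
    counts = []
    for s in box_sizes:
        # Trim so the raster tiles evenly, then count boxes that contain any road pixel.
        h, w = (grid.shape[0] // s) * s, (grid.shape[1] // s) * s
        tiled = grid[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(tiled.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope

# Sanity checks: a filled square has dimension 2, a straight line dimension 1.
square = np.ones((64, 64), dtype=bool)
line = np.zeros((64, 64), dtype=bool)
line[32, :] = True
```

Real road networks fall between these extremes; a denser, more space-filling network yields a dimension closer to 2.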
14 pages, 8991 KiB  
Article
AT-Text: Assembling Text Components for Efficient Dense Scene Text Detection
by Haiyan Li and Hongtao Lu
Future Internet 2020, 12(11), 200; https://doi.org/10.3390/fi12110200 - 17 Nov 2020
Cited by 3 | Viewed by 1976
Abstract
Text detection is a prerequisite for text recognition in scene images. Previous segmentation-based methods for detecting scene text have already achieved promising performance. However, such approaches may produce spurious text instances, as they usually confuse the boundaries of dense text instances and then infer word/text-line instances relying heavily on meticulous heuristic rules. We propose a novel Assembling Text Components (AT-text) method that accurately detects dense text in scene images. AT-text localizes word/text-line instances in a bottom-up manner by assembling a parsimonious component set. We employ a segmentation model that encodes multi-scale text features, considerably improving the classification accuracy of text/non-text pixels. The text candidate components are finely classified and selected via discriminative segmentation results. This allows AT-text to efficiently filter out false-positive candidate components and then assemble the remaining text components into different text instances. AT-text works well on multi-oriented and multi-language text without complex post-processing or character-level annotation. Compared with existing works, it achieves satisfactory results and a good balance between precision and recall on the ICDAR2013 and MSRA-TD500 public benchmark datasets. Full article
20 pages, 4597 KiB  
Article
An Interoperable UMLS Terminology Service Using FHIR
by Rishi Saripalle, Mehdi Sookhak and Mahboobeh Haghparast
Future Internet 2020, 12(11), 199; https://doi.org/10.3390/fi12110199 - 16 Nov 2020
Cited by 5 | Viewed by 3680
Abstract
The Unified Medical Language System (UMLS) is an internationally recognized medical vocabulary that enables semantic interoperability across various biomedical terminologies. To use its knowledge, users must understand its complex knowledge structure, a structure that is not interoperable and does not comply with any known biomedical or healthcare standard. Furthermore, users also need good technical skills to understand its inner workings and to interact with UMLS in general. These barriers may raise concerns about using UMLS among interdisciplinary users in biomedical and healthcare informatics. Currently, there exists no terminology service that normalizes UMLS’s complex knowledge structure to a widely accepted interoperable healthcare standard and allows easy access to its knowledge while hiding its inner workings. The objective of this research is to design and implement a lightweight terminology service that allows easy access to UMLS knowledge structured using the Fast Healthcare Interoperability Resources (FHIR) standard, a widely accepted healthcare interoperability standard. The developed terminology service, named UMLS FHIR, leverages FHIR resources and features and can easily be integrated into any application to consume UMLS knowledge in the FHIR format, without the need to understand UMLS’s native knowledge structure and internal workings. Full article
(This article belongs to the Special Issue Recent Advances of Machine Learning Techniques on Smartphones)
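For context, the FHIR specification defines a standard terminology operation, CodeSystem/$lookup, through which a client of any FHIR terminology service resolves a code to its concept details. The sketch below only builds such a request URL; the base endpoint and the SNOMED CT example code are placeholders for illustration, not details of the UMLS FHIR service itself:

```python
from urllib.parse import urlencode

def lookup_url(base, system, code):
    # Build a request URL for FHIR's CodeSystem/$lookup operation,
    # which returns the details of a concept identified by system + code.
    return f"{base}/CodeSystem/$lookup?{urlencode({'system': system, 'code': code})}"

# Hypothetical endpoint with a SNOMED CT code, purely for illustration.
url = lookup_url("https://example.org/fhir", "http://snomed.info/sct", "73211009")
```

A service such as the one described above would answer this request with a FHIR Parameters resource, so the caller never touches UMLS's native structures.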
17 pages, 2264 KiB  
Review
On the Modeling of Automotive Security: A Survey of Methods and Perspectives
by Jingjing Hao and Guangsheng Han
Future Internet 2020, 12(11), 198; https://doi.org/10.3390/fi12110198 - 16 Nov 2020
Cited by 12 | Viewed by 3478
Abstract
As intelligent car networking represents the new direction of future vehicular development, automotive security plays an increasingly important role in the whole car industry chain. Provided that the accompanying security problems are solved, vehicles will offer more convenience while ensuring safety. Security models can be used as tools to reason about the security of an automotive system and represent it in a structured manner. It is essential to improve knowledge about security models by comparing them, besides proposing new methods. This paper aims to give a comprehensive introduction to the topic of security models for the Intelligent Transport System (ITS). A survey of current methodologies for security modeling is conducted, and a classification scheme is subsequently proposed. Furthermore, the existing frameworks and methods used to build automotive security models are broadly examined according to the features of automotive electronic systems. A number of fundamental aspects are defined to compare the presented methods in order to understand automotive security modeling in depth. Full article
(This article belongs to the Section Cybersecurity)
15 pages, 1066 KiB  
Article
An Organized Repository of Ethereum Smart Contracts’ Source Codes and Metrics
by Giuseppe Antonio Pierro, Roberto Tonelli and Michele Marchesi
Future Internet 2020, 12(11), 197; https://doi.org/10.3390/fi12110197 - 15 Nov 2020
Cited by 29 | Viewed by 3781
Abstract
Many empirical software engineering studies show that there is a need for repositories where source code is acquired, filtered and classified. During the last few years, Ethereum block explorer services have emerged as popular tools to explore and search Ethereum blockchain data such as transactions, addresses, tokens, smart contracts’ source code, prices and other activities taking place on the Ethereum blockchain. Despite the availability of this kind of service, retrieving specific information useful to empirical software engineering studies, such as the study of smart contracts’ software metrics, may require many subtasks, such as searching for specific transactions in a block, parsing files in HTML format, and filtering smart contracts to remove duplicated code or unused smart contracts. In this paper, we address this problem by creating Smart Corpus, a corpus of smart contracts in an organized, reasoned and up-to-date repository where Solidity source code and other metadata about Ethereum smart contracts can easily and systematically be retrieved. We present Smart Corpus’s design and initial implementation, and we show how the data set of smart contracts’ source code in a variety of programming languages can be queried and processed to obtain useful information on smart contracts and their software metrics. Smart Corpus aims to create a smart-contract repository where smart-contract data (source code, application binary interface (ABI) and byte code) are freely and immediately available and are classified based on the main software metrics identified in the scientific literature. Smart contracts’ source code has been validated by EtherScan, and each contract comes with its own associated software metrics as computed by the freely available software PASO. Moreover, Smart Corpus can be easily extended as the number of new smart contracts increases day by day. Full article
13 pages, 1661 KiB  
Article
Proposal and Investigation of an Artificial Intelligence (AI)-Based Cloud Resource Allocation Algorithm in Network Function Virtualization Architectures
by Vincenzo Eramo, Francesco Giacinto Lavacca, Tiziana Catena and Paul Jaime Perez Salazar
Future Internet 2020, 12(11), 196; https://doi.org/10.3390/fi12110196 - 13 Nov 2020
Cited by 8 | Viewed by 2120
Abstract
The long time needed to reconfigure cloud resources in Network Function Virtualization environments has led to the proposal of solutions in which prediction-based resource allocation is performed. All of them are based on predicting the traffic or the needed resources while minimizing symmetric loss functions such as the Mean Squared Error. When inevitable prediction errors are made, these methodologies cannot weigh positive and negative prediction errors differently, even though the two impact the total network cost differently. In fact, if the predicted traffic is higher than the real one, an over-allocation cost, referred to as the over-provisioning cost, will be paid by the network operator; conversely, in the opposite case, a Quality of Service degradation cost, referred to as the under-provisioning cost, will be due to compensate the users for the resource under-allocation. In this paper we propose and investigate a resource allocation strategy based on a Long Short-Term Memory algorithm in which the training operation minimizes an asymmetric cost function that weighs positive and negative prediction errors, and the corresponding over-provisioning and under-provisioning costs, differently. In a typical traffic and network scenario, the proposed solution allows for a cost saving of 30% with respect to a solution with a symmetric cost function. Full article
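The core idea of such an asymmetric training loss can be sketched in a few lines: over-prediction is charged at the over-provisioning unit cost, under-prediction at the (typically larger) under-provisioning unit cost. The unit costs below are illustrative placeholders, not values from the paper:

```python
import numpy as np

def asymmetric_cost(pred, actual, c_over=1.0, c_under=3.0):
    # err > 0: resources were over-allocated -> pay c_over per unit.
    # err < 0: resources were under-allocated -> pay c_under per unit
    # (QoS degradation), which is weighted more heavily here.
    err = pred - actual
    return np.where(err >= 0, c_over * err, -c_under * err).mean()
```

Minimizing this instead of the MSE pushes a predictor (e.g. an LSTM, as in the paper) to err on the side of slight over-provisioning, since under-provisioning is penalized more.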
23 pages, 4021 KiB  
Article
E-Marketplace as a Tool for the Revitalization of Portuguese Craft Industry: The Design Process in the Development of an Online Platform
by Nuno Martins, Daniel Brandão, Heitor Alvelos and Sara Silva
Future Internet 2020, 12(11), 195; https://doi.org/10.3390/fi12110195 - 12 Nov 2020
Cited by 10 | Viewed by 4499
Abstract
The craft trade in Portugal faces challenges that compromise its productive and economic sustainability and may result in the disappearance of millenary techniques, traditions, and industrial practices of high symbolic and historical value. The growing incompatibility of these traditional activities with digital technologies, the lack of resources, and a growing age gap are among the main problems identified. This situation, made worse by various restrictions pertaining to the COVID-19 pandemic, points towards the possible extinction of this type of manual arts. The goal of this research is to demonstrate how the design process of an e-marketplace platform, throughout its different phases, may contribute to the revitalization of traditional industries. The methodologies adopted in the framework consisted of the study of UX and UI best design practices, including wireframe design, user flows, definition of personas, development of prototypes, and style guides. The results of the usability tests conducted on the prototype allowed a gradual improvement of the solution, culminating in the confirmation of its effectiveness. The study concluded that digital technology, namely a well-designed e-marketplace solution, could bring buyers and sellers closer together, and is thus a tool with high potential for the dissemination and sustainability of the craft industry. Full article
(This article belongs to the Section Techno-Social Smart Systems)
14 pages, 801 KiB  
Article
Homogeneous Data Normalization and Deep Learning: A Case Study in Human Activity Classification
by Ivan Miguel Pires, Faisal Hussain, Nuno M. Garcia, Petre Lameski and Eftim Zdravevski
Future Internet 2020, 12(11), 194; https://doi.org/10.3390/fi12110194 - 10 Nov 2020
Cited by 24 | Viewed by 3711
Abstract
One class of applications for human activity recognition methods is found in mobile devices for monitoring older adults and people with special needs. Recently, many studies have been performed to create intelligent methods for the recognition of human activities. However, the different mobile devices on the market acquire data from sensors at different frequencies. This paper focuses on implementing four data normalization techniques, i.e., MaxAbsScaler, MinMaxScaler, RobustScaler, and Z-Score. Subsequently, we evaluate the impact of these normalization algorithms on the classification of human activities with deep neural networks (DNN). The impact of data normalization was counterintuitive, resulting in a degradation of performance. Namely, when using the accelerometer data, the accuracy dropped from about 79% to only 53% for the best normalization approach. Similarly, for the gyroscope data, the accuracy without normalization was about 81.5%, whereas with the best normalization it was only 60%. It can be concluded that data normalization techniques are not helpful in classification problems with homogeneous data. Full article
(This article belongs to the Special Issue Future Intelligent Systems and Networks 2020-2021)
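The four normalization techniques named in the abstract have simple column-wise definitions. Shown here as minimal NumPy re-implementations matching the semantics of the scikit-learn scalers they are named after (not the paper's code):

```python
import numpy as np

def max_abs(x):   # MaxAbsScaler: scale each column into [-1, 1] by its max |value|
    return x / np.abs(x).max(axis=0)

def min_max(x):   # MinMaxScaler: scale each column into [0, 1]
    return (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))

def robust(x):    # RobustScaler: center on the median, scale by the IQR
    q1, q2, q3 = np.percentile(x, [25, 50, 75], axis=0)
    return (x - q2) / (q3 - q1)

def z_score(x):   # Z-Score (StandardScaler): zero mean, unit variance per column
    return (x - x.mean(axis=0)) / x.std(axis=0)
```

Each scaler removes per-device offset and scale; the paper's finding is that, for homogeneous sensor data, discarding that information can hurt the DNN classifier.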
14 pages, 365 KiB  
Article
High Throughput Data Relay in UAV Wireless Networks
by Fenyu Jiang and Chris Phillips
Future Internet 2020, 12(11), 193; https://doi.org/10.3390/fi12110193 - 09 Nov 2020
Cited by 3 | Viewed by 2036
Abstract
As a result of their high mobility and reduced cost, Unmanned Aerial Vehicles (UAVs) have been found to be a promising tool in wireless networks. A UAV can perform the role of a base station as well as of a mobile relay connecting distant ground terminals. In this paper, we dispatch a UAV to a disaster area to help relay information for victims. We incorporate a bandwidth-efficient technique called the Dual-Sampling (DS) method when planning the UAV flight trajectory, aiming to maximize the data transmission throughput. We propose an iterative algorithm for solving this problem: the victim bandwidth scheduling and the UAV trajectory are alternately optimized in each iteration, while a power balance mechanism implemented in the algorithm ensures the proper functioning of the DS method. We compare the results of the DS-enabled scheme with two non-DS schemes, namely a fair bandwidth allocation scheme and a bandwidth contention scheme. The DS scheme outperforms the other two regarding the max-min average data rate among all the ground victims. Furthermore, we derive the theoretical optimal performance of the DS scheme for a given scenario and find that the proposed approach can be regarded as a general method for solving this optimization problem. We also observe that the optimal UAV trajectory for the DS scheme is quite different from that of the non-DS bandwidth contention scheme. Full article
(This article belongs to the Section Internet of Things)

18 pages, 2394 KiB  
Article
A Probabilistic VDTN Routing Scheme Based on Hybrid Swarm-Based Approach
by Youcef Azzoug, Abdelmadjid Boukra and Vasco N. G. J. Soares
Future Internet 2020, 12(11), 192; https://doi.org/10.3390/fi12110192 - 07 Nov 2020
Cited by 4 | Viewed by 1889
Abstract
Probabilistic Delay Tolerant Network (DTN) routing has been adapted for vehicular network (VANET) routing in numerous works that exploit the historic routing profile of nodes to forward bundles through better Store-Carry-and-Forward (SCF) relay nodes. In this paper, we propose a new hybrid swarm-inspired probabilistic Vehicular DTN (VDTN) router that optimizes the next-SCF vehicle selection using a combination of two bio-metaheuristic techniques: the Firefly Algorithm (FA) and Glowworm Swarm Optimization (GSO). The FA-based strategy exploits the stochastic intelligence of fireflies in moving toward better individuals, while the GSO-based strategy mimics the movement of glowworms toward better areas for repositioning and food foraging. Both FA and GSO are executed simultaneously on each node to track better SCF vehicles towards each bundle's destination. A geography-based recovery method is performed in case no better SCF vehicles are found using the hybrid FA–GSO approach. The proposed FA–GSO VDTN scheme is compared to the ProPHET and GeoSpray routers. The simulation results indicate optimized bundle flooding levels and a better combination of delivery delay and delivery probability. Full article
(This article belongs to the Special Issue Delay-Tolerant Networking)
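As a rough illustration of the firefly-inspired movement the router above builds on, the sketch below implements the standard FA attraction step, in which a firefly moves toward a brighter one with attractiveness decaying in the squared distance. The function name, parameters, and toy coordinates are our own assumptions for illustration, not the paper's VDTN-specific scoring of SCF vehicles.

```python
import numpy as np

def firefly_step(xi, xj, beta0=1.0, gamma=1.0, alpha=0.0, rng=None):
    """One standard FA move: firefly xi is attracted toward the brighter
    firefly xj with attractiveness beta0 * exp(-gamma * r^2), where r^2 is
    the squared distance; alpha scales an optional random exploration term."""
    r2 = float(np.sum((xi - xj) ** 2))
    step = beta0 * np.exp(-gamma * r2) * (xj - xi)
    noise = alpha * (rng.random(xi.shape) - 0.5) if rng is not None else 0.0
    return xi + step + noise

# toy example: one firefly drawn toward a brighter one
xi, xj = np.array([0.0, 0.0]), np.array([1.0, 1.0])
xi_new = firefly_step(xi, xj)  # moves part of the way toward xj
```

In the paper's setting the "brightness" would come from the probabilistic routing profile of candidate SCF vehicles rather than a generic objective value.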

23 pages, 822 KiB  
Article
An Internet of Things (IoT) Acceptance Model. Assessing Consumer’s Behavior toward IoT Products and Applications
by Maria Tsourela and Dafni-Maria Nerantzaki
Future Internet 2020, 12(11), 191; https://doi.org/10.3390/fi12110191 - 03 Nov 2020
Cited by 37 | Viewed by 5912
Abstract
A common managerial and theoretical concern is to know how individuals perceive Internet of Things (IoT) products and applications and how to accelerate their adoption. The purpose of the current study is to answer, "What are the factors that define behavioral intention to adopt IoT products and applications among individuals?" An IoT adoption model was developed and tested, incorporating pull factors from two different information impact sources: technical and psychological. This study employs statistical structural equation modeling (SEM) in order to examine the conceptual IoT acceptance model. It is demonstrated that facilitated appropriation, perceived usefulness, and perceived ease of use, as mediators, significantly influence consumers' attitude and behavioral intention towards IoT products and applications. User character, cyber resilience, cognitive instrumentals, social influence, and trust, all with different significance rates, exhibited an indirect effect through the three mediators. The IoT acceptance model (IoTAM) upgrades current knowledge on consumers' behavioral intention and equips practitioners with the knowledge needed to create successful integrated marketing tactics and communication strategies. It provides a solid base for examining multirooted models for the acceptance of newly formed technologies, as it bridges the discontinuity in migrating from information and communication technology (ICT) to IoT adoption studies, a discontinuity that has distorted societies' abilities to make informed decisions about IoT adoption and use. Full article
(This article belongs to the Special Issue Feature Papers for Future Internet—Internet of Things Section)

28 pages, 9493 KiB  
Review
Fog Computing for Smart Cities’ Big Data Management and Analytics: A Review
by Elarbi Badidi, Zineb Mahrez and Essaid Sabir
Future Internet 2020, 12(11), 190; https://doi.org/10.3390/fi12110190 - 31 Oct 2020
Cited by 39 | Viewed by 5721
Abstract
Demographic growth in urban areas means that modern cities face challenges in ensuring a steady supply of water and electricity, smart transport, livable space, better health services, and citizens’ safety. Advances in sensing, communication, and digital technologies promise to mitigate these challenges. Hence, many smart cities have taken a new step in moving away from internal information technology (IT) infrastructure to utility-supplied IT delivered over the Internet. The benefit of this move is to manage the vast amounts of data generated by the various city systems, including water and electricity systems, the waste management system, transportation system, public space management systems, health and education systems, and many more. Furthermore, many smart city applications are time-sensitive and need to quickly analyze data to react promptly to the various events occurring in a city. The new and emerging paradigms of edge and fog computing promise to address big data storage and analysis in the field of smart cities. Here, we review existing service delivery models in smart cities and present our perspective on adopting these two emerging paradigms. We specifically describe the design of a fog-based data pipeline to address the issues of latency and network bandwidth required by time-sensitive smart city applications. Full article
(This article belongs to the Special Issue Emerging Trends of Fog Computing in Internet of Things Applications)

50 pages, 2645 KiB  
Article
Password Managers—It’s All about Trust and Transparency
by Fahad Alodhyani, George Theodorakopoulos and Philipp Reinecke
Future Internet 2020, 12(11), 189; https://doi.org/10.3390/fi12110189 - 30 Oct 2020
Cited by 11 | Viewed by 6545
Abstract
A password is considered to be the first line of defence in protecting online accounts, but there are problems when people handle their own passwords, for example, password reuse and difficulty memorizing them. Password managers appear to be a promising solution to help people handle their passwords. However, adoption of password managers is low, even though they are widely available, and there are few studies on their users. Therefore, the issues that cause people not to use password managers must be investigated, along with, more generally, what users think about them and about their user interfaces. In this paper, we report three studies that we conducted: on the user interfaces and functions of three password managers; a usability test and an interview study; and an online questionnaire study about users and non-users of password managers, which also compares experts and non-experts regarding their use (or non-use) of password managers. Our findings show that usability is not a major problem; rather, lack of trust and transparency are the main reasons for the low adoption of password managers. Users of password managers have trust and security concerns, while there are a few issues with the user interfaces and functions of password managers. Full article
(This article belongs to the Special Issue Security and Privacy in Social Networks and Solutions)

18 pages, 2741 KiB  
Article
An Improved Deep Belief Network Prediction Model Based on Knowledge Transfer
by Yue Zhang and Fangai Liu
Future Internet 2020, 12(11), 188; https://doi.org/10.3390/fi12110188 - 29 Oct 2020
Cited by 8 | Viewed by 2506
Abstract
A deep belief network (DBN) is a powerful generative model based on unlabeled data. However, a traditional DBN suffers from gradient dispersion, and it is difficult to quickly determine its best network structure. This paper proposes an improved deep belief network (IDBN): first, the basic DBN structure is pre-trained and the learned weight parameters are fixed; secondly, the learned weight parameters are transferred to new neurons and hidden layers through knowledge transfer, thereby constructing the optimal network width and depth of the DBN; finally, a top-down, layer-by-layer partial least squares regression method is used to fine-tune the weight parameters obtained by pre-training, which avoids the problems of traditional fine-tuning based on the back-propagation algorithm. To verify the prediction performance of the model, this paper conducts benchmark experiments on the Movielens-20M (ML-20M) and Last.fm-1k (LFM-1k) public data sets. Compared with other traditional algorithms, IDBN is better than other fixed models in terms of prediction performance and training time. The proposed IDBN model has higher prediction accuracy and convergence speed. Full article
(This article belongs to the Special Issue Future Networks: Latest Trends and Developments)
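The abstract's idea of seeding new units from already-learned weights, rather than from scratch, can be sketched as follows. This is a minimal NumPy reading of "transferring learned weight parameters to a new neuron", with hypothetical shapes and a hypothetical perturbation scale; it is not the paper's actual IDBN training procedure.

```python
import numpy as np

def widen_layer(W, rng):
    """Grow a trained visible-to-hidden weight matrix by one hidden unit.
    Instead of random initialization, the new column is copied from an
    existing learned unit and lightly perturbed (knowledge transfer)."""
    donor = W[:, rng.integers(W.shape[1])]               # pick a learned unit
    new_unit = donor + 0.01 * rng.standard_normal(W.shape[0])
    return np.column_stack([W, new_unit])

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))   # stand-in for pretrained weights (4 visible, 3 hidden)
W_wider = widen_layer(W, rng)     # now 4 visible, 4 hidden units
```

The appeal of this kind of initialization is that the widened network starts close to the pre-trained one, so further training refines rather than restarts the representation.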

20 pages, 2737 KiB  
Article
A Comparative Analysis of Machine Learning Techniques for Cyberbullying Detection on Twitter
by Amgad Muneer and Suliman Mohamed Fati
Future Internet 2020, 12(11), 187; https://doi.org/10.3390/fi12110187 - 29 Oct 2020
Cited by 102 | Viewed by 11309
Abstract
The advent of social media, particularly Twitter, raises many issues due to a misunderstanding regarding the concept of freedom of speech. One of these issues is cyberbullying, which is a critical global issue that affects both individual victims and societies. Many attempts have been introduced in the literature to intervene in, prevent, or mitigate cyberbullying; however, because these attempts rely on the victims’ interactions, they are not practical. Therefore, detection of cyberbullying without the involvement of the victims is necessary. In this study, we attempted to explore this issue by compiling a global dataset of 37,373 unique tweets from Twitter. Moreover, seven machine learning classifiers were used, namely, Logistic Regression (LR), Light Gradient Boosting Machine (LGBM), Stochastic Gradient Descent (SGD), Random Forest (RF), AdaBoost (ADB), Naive Bayes (NB), and Support Vector Machine (SVM). Each of these algorithms was evaluated using accuracy, precision, recall, and F1 score as the performance metrics to determine the classifiers’ recognition rates applied to the global dataset. The experimental results show the superiority of LR, which achieved a median accuracy of around 90.57%. Among the classifiers, logistic regression achieved the best F1 score (0.928), SGD achieved the best precision (0.968), and SVM achieved the best recall (1.00). Full article
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Cybercrime Detection)
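The per-classifier evaluation described above comes down to the standard confusion-matrix metrics. A minimal, library-free sketch follows; the labels are made up for illustration and are not drawn from the paper's tweet dataset.

```python
def prf1(y_true, y_pred, positive=1):
    """Precision, recall, and F1 score for a binary labelling task."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# hypothetical labels: 1 = bullying tweet, 0 = benign
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
p, r, f = prf1(y_true, y_pred)
```

The study reports exactly these metrics (plus accuracy) for each of the seven classifiers, which is what makes the per-metric winners (LR on F1, SGD on precision, SVM on recall) comparable.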

18 pages, 1543 KiB  
Article
An Analysis of the Supply of Open Government Data
by Alan Ponce and Raul Alberto Ponce Rodriguez
Future Internet 2020, 12(11), 186; https://doi.org/10.3390/fi12110186 - 29 Oct 2020
Cited by 5 | Viewed by 2678
Abstract
An index of the release of open government data, published in 2016 by the Open Knowledge Foundation, shows that there is significant variability in countries' supply of this public good. What explains these cross-country differences? Adopting an interdisciplinary approach based on data science and economic theory, we developed the following research workflow. First, we gather, clean, and merge different datasets released by institutions such as the Open Knowledge Foundation, World Bank, United Nations, World Economic Forum, Transparency International, Economist Intelligence Unit, and International Telecommunication Union. Then, we conduct feature extraction and variable selection founded on economic domain knowledge. Next, we fit several linear regression models, testing whether cross-country differences in the supply of open government data can be explained by differences in countries' economic, social, and institutional structures. Our analysis provides evidence that a country's civil liberties, government transparency, quality of democracy, efficiency of government intervention, economies of scale in the provision of public goods, and the size of the economy are statistically significant in explaining the cross-country differences in the supply of open government data. Our analysis also suggests that political participation, sociodemographic characteristics, and demographic and global income distribution dummies do not help to explain a country's supply of open government data. In summary, we show that cross-country differences in governance, social institutions, and the size of the economy can explain the global distribution of open government data. Full article
(This article belongs to the Special Issue Data Science and Knowledge Discovery)
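The linear regression step in the workflow above can be sketched with ordinary least squares. The predictor names and the synthetic data below are placeholders standing in for the study's country-level variables, not its actual dataset.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200  # hypothetical number of country observations

# stand-ins for predictors such as civil liberties, transparency, log GDP
X = rng.standard_normal((n, 3))
X1 = np.column_stack([np.ones(n), X])        # design matrix with intercept

true_beta = np.array([2.0, 0.5, 1.0, -0.3])  # intercept + three coefficients
y = X1 @ true_beta + 0.1 * rng.standard_normal(n)  # open-data index + noise

# OLS fit: which structural differences explain the open-data supply?
beta_hat, *_ = np.linalg.lstsq(X1, y, rcond=None)
```

In the study proper, the significance of each coefficient (rather than just its point estimate) is what supports or rejects each candidate explanation.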

18 pages, 422 KiB  
Article
A MILP Model for a Byzantine Fault Tolerant Blockchain Consensus
by Vitor Nazário Coelho, Rodolfo Pereira Araújo, Haroldo Gambini Santos, Wang Yong Qiang and Igor Machado Coelho
Future Internet 2020, 12(11), 185; https://doi.org/10.3390/fi12110185 - 29 Oct 2020
Cited by 1 | Viewed by 4815
Abstract
Mixed-integer mathematical programming has been widely used to model and solve challenging optimization problems. One interesting feature of this technique is the ability to prove the optimality of the achieved solution for the many practical scenarios where a linear programming model can be devised. This paper explores its use to model very strong Byzantine adversaries in the context of distributed consensus systems. In particular, we apply the proposed technique to find challenging adversarial conditions on a state-of-the-art blockchain consensus: the Neo dBFT. The Neo Blockchain has been using the dBFT algorithm since its foundation, but, due to the complexity of the algorithm, it is challenging to devise definitive algebraic proofs that guarantee the safety/liveness of the system (and to adjust them for every change proposed by the community). Core developers have to manually devise and explore possible adversarial attack scenarios, an exhaustive task. The proposed multi-objective model is intended to assist the search for possible faulty scenarios; it includes three objective functions that can be combined as a maximization problem for testing one-block finality or a minimization problem for ensuring liveness. Automated graphics help developers visually observe attack conditions and quickly find a solution. This paper proposes an exact adversarial model that explores current limits for practical blockchain consensus applications such as dBFT, with ideas that can also be extended to other decentralized ledger technologies. Full article

17 pages, 873 KiB  
Article
Browser Forensic Investigations of WhatsApp Web Utilizing IndexedDB Persistent Storage
by Furkan Paligu and Cihan Varol
Future Internet 2020, 12(11), 184; https://doi.org/10.3390/fi12110184 - 28 Oct 2020
Cited by 8 | Viewed by 8103
Abstract
Digital evidence is becoming an indispensable factor in most legal cases. However, technological advancements that lead to artifact complexity are forcing investigators to create sophisticated connections between the findings and the suspects for admissibility of evidence in court. This paper scrutinizes whether IndexedDB, an emerging browser technology, can be a source of digital evidence to provide additional and correlating support for traditional investigation methods. It particularly focuses on the artifacts of the worldwide popular application WhatsApp. A single-case pretest–posttest quasi-experiment is applied with WhatsApp Messenger and the Web Application to populate and investigate artifacts in the IndexedDB storage of Google Chrome. The findings are characterized and presented with their potential to be utilized in forensic investigation verifications. The storage locations of the artifacts are laid out, and the operations of extraction, conversion, and presentation are systematized. Additionally, a proof-of-concept tool is developed for demonstration. The results show that WhatsApp Web IndexedDB storage can be employed for time frame analysis, demonstrating its value in evidence verification. Full article
(This article belongs to the Special Issue Information and Future Internet Security, Trust and Privacy)

20 pages, 1754 KiB  
Article
A Knowledge-Driven Multimedia Retrieval System Based on Semantics and Deep Features
by Antonio Maria Rinaldi, Cristiano Russo and Cristian Tommasino
Future Internet 2020, 12(11), 183; https://doi.org/10.3390/fi12110183 - 28 Oct 2020
Cited by 12 | Viewed by 2513
Abstract
In recent years, users' information needs have changed due to the heterogeneity of web content, which increasingly involves multimedia. Although modern search engines provide visual queries, it is not easy to find systems that allow searching within a particular domain of interest and that perform such a search by combining text and visual queries. Different approaches have been proposed over the years, and in the semantic research field many authors have proposed techniques based on ontologies. On the other hand, in the context of image retrieval systems, techniques based on deep learning have obtained excellent results. In this paper we present novel approaches for semantic image retrieval and a possible combination for multimedia document analysis. Several results are presented to show the performance of our approach compared with literature baselines. Full article
(This article belongs to the Special Issue Data Science and Knowledge Discovery)

12 pages, 5817 KiB  
Article
Paranoid Transformer: Reading Narrative of Madness as Computational Approach to Creativity
by Yana Agafonova, Alexey Tikhonov and Ivan P. Yamshchikov
Future Internet 2020, 12(11), 182; https://doi.org/10.3390/fi12110182 - 27 Oct 2020
Cited by 5 | Viewed by 3579
Abstract
This paper revisits the receptive theory in the context of computational creativity. It presents a case study of a Paranoid Transformer—a fully autonomous text generation engine with raw output that could be read as the narrative of a mad digital persona without any additional human post-filtering. We describe technical details of the generative system, provide examples of output, and discuss the impact of receptive theory, chance discovery, and simulation of fringe mental state on the understanding of computational creativity. Full article
(This article belongs to the Special Issue Natural Language Engineering: Methods, Tasks and Applications)

16 pages, 484 KiB  
Review
Digital Irrigated Agriculture: Towards a Framework for Comprehensive Analysis of Decision Processes under Uncertainty
by Francesco Cavazza, Francesco Galioto, Meri Raggi and Davide Viaggi
Future Internet 2020, 12(11), 181; https://doi.org/10.3390/fi12110181 - 26 Oct 2020
Cited by 4 | Viewed by 2058
Abstract
Several studies address the topic of Information and Communication Technologies (ICT) adoption in irrigated agriculture. Many of these studies testify to the growing importance of ICT in influencing the evolution of the sector, especially by bringing down information barriers. While the potentialities of such technologies are widely investigated and confirmed, there is still a gap in understanding and modeling decisions on ICT information implementation. This gap concerns, in particular, accounting for all the aspects of uncertainty, which are mainly due to a lack of knowledge on the reliability of ICT and on the errors of ICT information. Overall, such uncertainties might affect Decision Makers' (DMs') behavior and hamper ICT uptake. To support policy makers in the design of uncertainty-management policies for the achievement of the benefits of a digital irrigated agriculture, we further investigated the topic of uncertainty modelling in ICT uptake decisions. To do so, we reviewed the economic literature on ambiguity, in the context of the wider literature on decision making under uncertainty, in order to explore its potential for better modeling ICT uptake decisions. Findings from the discussed literature confirm the capability of this approach to yield a deeper understanding of decision processes when the reliability of ICT is unknown and provide better insights on how behavioral barriers to the achievement of potential ICT benefits can be overcome. Policy implications to accompany the sector in the digitalization process mainly include: (a) defining new approaches for ICT developers to tailor platforms to heterogeneous DMs' needs; (b) establishing uncertainty-management policies complementary to the adoption of DM tools. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) in Agriculture)

19 pages, 1625 KiB  
Article
Ensemble Classifiers for Network Intrusion Detection Using a Novel Network Attack Dataset
by Ahmed Mahfouz, Abdullah Abuhussein, Deepak Venugopal and Sajjan Shiva
Future Internet 2020, 12(11), 180; https://doi.org/10.3390/fi12110180 - 26 Oct 2020
Cited by 51 | Viewed by 3633
Abstract
Due to the extensive use of computer networks, new risks have arisen, and improving the speed and accuracy of security mechanisms has become a critical need. Although new security tools have been developed, the fast growth of malicious activities continues to be a pressing issue that creates severe threats to network security. Classical security tools such as firewalls are used as a first-line defense against security problems. However, firewalls do not entirely or perfectly eliminate intrusions. Thus, network administrators rely heavily on intrusion detection systems (IDSs) to detect such network intrusion activities. Machine learning (ML) is a practical approach to intrusion detection that, based on data, learns how to differentiate between abnormal and regular traffic. This paper provides a comprehensive analysis of some existing ML classifiers for identifying intrusions in network traffic. It also produces a new reliable dataset called GTCS (Game Theory and Cyber Security) that matches real-world criteria and can be used to assess the performance of ML classifiers in a detailed experimental evaluation. Finally, the paper proposes an ensemble, adaptive classifier model composed of multiple classifiers with different learning paradigms to address the issues of accuracy and false alarm rates in IDSs. Our classifiers show high precision and recall rates and use a comprehensive set of features compared to previous work. Full article
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Cybercrime Detection)
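The simplest way to combine classifiers with different learning paradigms, as the ensemble above does, is a majority vote over their per-sample predictions. This toy combiner is our illustration of that idea only; it is not the paper's adaptive model, and the labels are invented rather than taken from the GTCS dataset.

```python
from collections import Counter

def majority_vote(labels):
    """Combine the labels that several base classifiers assigned to one
    sample; ties resolve to the label seen first among the most common."""
    return Counter(labels).most_common(1)[0][0]

# three hypothetical base classifiers voting on one network flow:
# 1 = attack, 0 = benign
combined = majority_vote([1, 0, 1])
```

Because the base classifiers err in different ways, the combined vote can trade off the accuracy and false alarm rate better than any single member, which is the motivation the abstract gives for the ensemble.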

18 pages, 3161 KiB  
Review
From Mirrors to Free-Space Optical Communication—Historical Aspects in Data Transmission
by Magdalena Garlinska, Agnieszka Pregowska, Karol Masztalerz and Magdalena Osial
Future Internet 2020, 12(11), 179; https://doi.org/10.3390/fi12110179 - 22 Oct 2020
Cited by 33 | Viewed by 5785
Abstract
Fast communication is of high importance. Recently, increased data demand and a crowded radio frequency spectrum have become crucial issues. Free-Space Optical Communication (FSOC) has diametrically changed the way people exchange information. As an alternative to wired communication systems, it allows efficient voice, video, and data transmission using a medium like air. Due to its large bandwidth, FSOC can be used in various applications and has therefore become an important part of our everyday life. The main advantages of FSOC are high speed, cost savings, compact structure, low power consumption, energy efficiency, maximal transfer capacity, and broad applicability. The rapid development of high-speed connection technology allows repair downtime to be reduced and gives the ability to quickly establish a backup network in an emergency. Unfortunately, FSOC is susceptible to disruption due to atmospheric conditions or direct sunlight. Here, we briefly discuss Free-Space Optical Communication, from mirrors and optical telegraphs to modern wireless systems, and outline the future development directions of optical communication. Full article

11 pages, 402 KiB  
Article
Learning a Hierarchical Global Attention for Image Classification
by Kerang Cao, Jingyu Gao, Kwang-nam Choi and Lini Duan
Future Internet 2020, 12(11), 178; https://doi.org/10.3390/fi12110178 - 22 Oct 2020
Cited by 2 | Viewed by 1739
Abstract
To classify image material on the internet, deep learning, especially deep neural networks, is the most effective, and also the costliest, of all computer vision methods. Convolutional neural networks (CNNs) learn a comprehensive feature representation by exploiting local information with a fixed receptive field, demonstrating distinguished capacities on image classification. Recent works concentrate on efficient feature exploration but neglect the global information needed for holistic consideration, and a large effort has been devoted to reducing the computational costs of deep neural networks. Here, we provide a hierarchical global attention mechanism that improves the network representation with a restricted increase in computational complexity. Unlike nonlocal-based methods, the hierarchical global attention mechanism requires no matrix multiplication and can be flexibly applied in various modern network designs. Experimental results demonstrate that the proposed hierarchical global attention mechanism can conspicuously improve image classification precision (a reduction of 7.94% and 16.63% in Top 1 and Top 5 errors, respectively) with little increase in computational complexity (6.23%) in comparison to competing approaches. Full article
(This article belongs to the Section Big Data and Augmented Intelligence)
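The abstract does not spell out the mechanism, so as a hedged illustration of "global attention without matrix multiplication", here is a squeeze-and-excitation-style channel gate built only from a global mean and elementwise operations. The shapes, the sigmoid gate, and the function name are our assumptions, not the paper's hierarchical design.

```python
import numpy as np

def global_gate(x):
    """Reweight feature channels by a sigmoid of their global average.
    Uses only a mean and elementwise ops, i.e., no matrix multiplication,
    in the spirit of a lightweight global attention.

    x: feature map of shape (channels, height, width)."""
    context = x.mean(axis=(1, 2), keepdims=True)   # (C, 1, 1) global descriptor
    gate = 1.0 / (1.0 + np.exp(-context))          # per-channel sigmoid gate
    return x * gate

x = np.ones((8, 4, 4))   # toy feature map
y = global_gate(x)       # same shape, channels rescaled by their global context
```

Avoiding matrix multiplication is what keeps the overhead small relative to nonlocal attention, whose pairwise similarity step is quadratic in the number of spatial positions.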
