Information, Volume 12, Issue 3 (March 2021) – 44 articles

Cover Story: Human navigation is a complex cognitive problem that is not well understood today, despite significant progress in neuroscience and the discovery of important spatially correlated neurons supporting space awareness and navigation. This paper proposes an original bio-inspired navigation model based on place cells, grid cells, and head direction cells. The model emulates the behavior of these cells and can construct different strategies for indoor and outdoor navigation. The proposed model may be used for autonomous robotic platforms as well as in assistive devices for visually impaired people.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
16 pages, 408 KiB  
Article
Foresight of Tourism in Kazakhstan: Experience Economy
by Nurzhan Kenzhebekov, Yerlan Zhailauov, Emil Velinov, Yelena Petrenko and Igor Denisov
Information 2021, 12(3), 138; https://doi.org/10.3390/info12030138 - 23 Mar 2021
Cited by 7 | Viewed by 5751
Abstract
The paper sheds light on the interconnections between tourism sector development and regional development in Kazakhstan. It analyzes the current competitiveness of tourist destinations in Kazakhstan. Based on qualitative and quantitative research, the study shows that a major transformation of marketing communication tools is needed in order to increase the competitiveness and image of Kazakhstani tourism. The study provides potential scenarios and solutions to increase tourist attractiveness, which would entice more investors and increase tourism capacity and potential. The paper also provides insights into ecotourism and the regional economy by outlining older and newer managerial and governmental approaches to supporting the entire tourism sector in Kazakhstan. Full article
(This article belongs to the Special Issue Enhancement of Local Resources through Tourism Activities)
20 pages, 2977 KiB  
Article
Representation of Slovak Research Information (A Case Study)
by Danica Zendulková, Boris Rysuľa and Andrea Putalová
Information 2021, 12(3), 137; https://doi.org/10.3390/info12030137 - 22 Mar 2021
Cited by 1 | Viewed by 2037
Abstract
In light of the increasing importance of the societal impact of research, this article addresses the question of how social sciences and humanities (SSH) research outputs from 2019 are represented in Slovak research portfolios in comparison with those of the EU-28 and the world. The data used for the analysis originate from the national R&D SK CRIS database, the bibliographic Central Register of Publication Activities (CREPČ), and the WoS Core Collection/InCites. The research data were suitable for the analysis where they were structured, available at the national level, of high quality and consistency, and covered as many components as possible, together with their mutual relations. The data resources should enable research outputs to be assigned to research categories. The analysis supports the conclusion that SSH research outputs in Slovakia in 2019 are appropriately represented and in general show an increasing trend. This is documented by the proportion of SSH research projects and other entities in the overall Slovak research outputs, and by the higher share of SSH research publications in comparison with the EU-28 and the world. Recommendations of a technical character concern research data management, data quality, and the integration of individual systems and available analytical tools. Full article
(This article belongs to the Special Issue ICT Enhanced Social Sciences and Humanities)
17 pages, 2764 KiB  
Article
Research on Automatic Question Answering of Generative Knowledge Graph Based on Pointer Network
by Shuang Liu, Nannan Tan, Yaqian Ge and Niko Lukač
Information 2021, 12(3), 136; https://doi.org/10.3390/info12030136 - 21 Mar 2021
Cited by 3 | Viewed by 3570
Abstract
Question answering based on knowledge graphs is an extremely challenging task in the field of natural language processing. Most existing Chinese Knowledge Base Question Answering (KBQA) systems can only return knowledge stored in the knowledge base through extractive methods. Nevertheless, this processing does not conform to human reading habits and cannot solve the out-of-vocabulary (OOV) problem. In this paper, a new generative question-answering method based on a knowledge graph is proposed, comprising three parts: knowledge vocabulary construction, data pre-processing, and answer generation. In vocabulary construction, BiLSTM-CRF is used to identify the entities in the source text, find the triples containing those entities, count word frequencies, and build the vocabulary. In data pre-processing, the pre-trained language model BERT, combined with word-frequency semantic features, is adopted to obtain word vectors. In answer generation, a combination of the vocabulary constructed from the knowledge graph and a pointer-generator network (PGN) is proposed to point to the corresponding entity when generating the answer. The experimental results show that the proposed method achieves better performance than other methods on the WebQA dataset. Full article
(This article belongs to the Collection Knowledge Graphs for Search and Recommendation)
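As an illustration of the copy mechanism mentioned in the abstract, the following is a minimal NumPy sketch (toy inputs and assumed variable names, not the authors' implementation) of how a pointer-generator network mixes the decoder's vocabulary distribution with the attention distribution over source tokens, so that out-of-vocabulary entities from the knowledge graph can still be emitted.

```python
# Minimal sketch of the pointer-generator mixing step (illustrative, not the paper's code).
import numpy as np

def final_distribution(p_gen, vocab_dist, attn_dist, src_token_ids, extended_vocab_size):
    """p_gen: scalar in (0, 1); vocab_dist: (V,); attn_dist: (src_len,);
    src_token_ids: ids of the source tokens in the extended vocabulary."""
    final = np.zeros(extended_vocab_size)
    final[: len(vocab_dist)] = p_gen * vocab_dist        # generate from the fixed vocabulary
    for pos, tok_id in enumerate(src_token_ids):         # copy from the source via attention
        final[tok_id] += (1.0 - p_gen) * attn_dist[pos]
    return final

# toy example: vocabulary of 5 words plus 1 OOV entity that appears in the source text
vocab_dist = np.array([0.4, 0.3, 0.1, 0.1, 0.1])
attn_dist = np.array([0.7, 0.3])
print(final_distribution(0.6, vocab_dist, attn_dist, src_token_ids=[5, 1], extended_vocab_size=6))
```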
25 pages, 1019 KiB  
Article
Information Retrieval and Knowledge Organization: A Perspective from the Philosophy of Science
by Birger Hjørland
Information 2021, 12(3), 135; https://doi.org/10.3390/info12030135 - 20 Mar 2021
Cited by 25 | Viewed by 6984
Abstract
Information retrieval (IR) is about making systems for finding documents or information. Knowledge organization (KO) is the field concerned with indexing, classification, and representing documents for IR, browsing, and related processes, whether performed by humans or computers. The field of IR is today dominated by search engines like Google. An important difference between KO and IR as research fields is that KO attempts to reflect knowledge as depicted by contemporary scholarship, in contrast to IR, which is based on, for example, “match” techniques, popularity measures or personalization principles. The classification of documents in KO mostly aims at reflecting the classification of knowledge in the sciences. Books about birds, for example, mostly reflect (or aim at reflecting) how birds are classified in ornithology. KO therefore requires access to the adequate subject knowledge; however, this is often characterized by disagreements. At the deepest layer, such disagreements are based on philosophical issues best characterized as “paradigms”. No IR technology and no system of knowledge organization can ever be neutral in relation to paradigmatic conflicts, and therefore such philosophical problems represent the basis for the study of IR and KO. Full article
(This article belongs to the Special Issue Knowledge Organization and the Disciplines of Information)
31 pages, 1865 KiB  
Article
Unsupervised DNF Blocking for Efficient Linking of Knowledge Graphs and Tables
by Mayank Kejriwal
Information 2021, 12(3), 134; https://doi.org/10.3390/info12030134 - 19 Mar 2021
Cited by 2 | Viewed by 2928
Abstract
Entity Resolution (ER) is the problem of identifying co-referent entity pairs across datasets, including knowledge graphs (KGs). ER is an important prerequisite in many applied KG search and analytics pipelines, with a typical workflow comprising two steps. In the first ‘blocking’ step, entities are mapped to blocks. Blocking avoids comparing all possible pairs of entities, since (in the second ‘similarity’ step) only entities within the same block are paired and compared, allowing for significant computational savings with a minimal loss of performance. Unfortunately, learning a blocking scheme in an unsupervised fashion is a non-trivial problem, and it has not been properly explored for the heterogeneous, semi-structured datasets that are prevalent in industrial and Web applications. This article presents an unsupervised algorithmic pipeline for learning Disjunctive Normal Form (DNF) blocking schemes on KGs, as well as on structurally heterogeneous tables that may not share a common schema. We evaluate the approach on six real-world dataset pairs and show that it is competitive with supervised and semi-supervised baselines. Full article
(This article belongs to the Collection Knowledge Graphs for Search and Recommendation)
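To make the blocking idea concrete, here is a minimal sketch (toy records and hand-written predicates, not the paper's learned DNF scheme) showing how a disjunction of blocking predicates restricts which record pairs are ever compared in the similarity step.

```python
# Illustrative disjunctive blocking: records sharing a key under ANY predicate become candidates.
from collections import defaultdict
from itertools import combinations

records = {
    1: {"name": "Jane Smith", "city": "Lisbon"},
    2: {"name": "J. Smith",   "city": "Lisboa"},
    3: {"name": "Bob Jones",  "city": "Porto"},
}

predicates = [
    lambda r: set(r["name"].lower().replace(".", "").split()),  # shared name token
    lambda r: {r["city"][:4].lower()},                           # city 4-character prefix
]

blocks = defaultdict(set)
for rid, rec in records.items():
    for i, pred in enumerate(predicates):
        for key in pred(rec):
            blocks[(i, key)].add(rid)

candidate_pairs = {pair for ids in blocks.values() for pair in combinations(sorted(ids), 2)}
print(candidate_pairs)   # {(1, 2)} -- record 3 is never compared against the others
```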
10 pages, 323 KiB  
Article
Pre-Training on Mixed Data for Low-Resource Neural Machine Translation
by Wenbo Zhang, Xiao Li, Yating Yang and Rui Dong
Information 2021, 12(3), 133; https://doi.org/10.3390/info12030133 - 18 Mar 2021
Cited by 4 | Viewed by 2522
Abstract
The pre-training and fine-tuning paradigm has been shown to be effective for low-resource neural machine translation. In this paradigm, models pre-trained on monolingual data are used to initialize translation models and thereby transfer knowledge from monolingual data into translation models. In recent years, pre-training models have usually taken sentences with randomly masked words as input and been trained by predicting these masked words based on the unmasked ones. In this paper, we propose a new pre-training method that still predicts masked words, but randomly replaces some of the unmasked words in the input with their translations in another language. The translation words come from bilingual data, so the pre-training data contain both monolingual and bilingual data. We conduct experiments on a Uyghur–Chinese corpus to evaluate our method. The experimental results show that our method gives the pre-trained model better generalization ability and helps the translation model achieve better performance. Through a word translation task, we also demonstrate that our method enables the embeddings of the translation model to acquire more alignment knowledge. Full article
(This article belongs to the Special Issue Neural Natural Language Generation)
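The data-preparation step described above can be sketched as follows; this is an illustrative toy (random masking rates, a two-entry toy French lexicon standing in for the bilingual dictionary), not the authors' code.

```python
# Build one mixed-language pre-training example: mask some words, translate some others.
import random

def make_example(sentence, lexicon, mask_prob=0.15, translate_prob=0.2, mask_token="[MASK]"):
    tokens, targets = [], []
    for word in sentence.split():
        if random.random() < mask_prob:
            tokens.append(mask_token)        # the model must predict this word from context
            targets.append(word)
        elif word in lexicon and random.random() < translate_prob:
            tokens.append(lexicon[word])     # replace an unmasked word with its translation
            targets.append(None)
        else:
            tokens.append(word)
            targets.append(None)
    return tokens, targets

random.seed(0)
toy_lexicon = {"good": "bon", "morning": "matin"}   # toy French glosses as a stand-in lexicon
print(make_example("good morning every one", toy_lexicon))
```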
23 pages, 324 KiB  
Article
Adoption of Social Media in Socio-Technical Systems: A Survey
by Gianfranco Lombardo, Monica Mordonini and Michele Tomaiuolo
Information 2021, 12(3), 132; https://doi.org/10.3390/info12030132 - 18 Mar 2021
Cited by 10 | Viewed by 3833
Abstract
This article describes the current landscape in the fields of social media and socio-technical systems. In particular, it analyzes the different ways in which social media are adopted in organizations, workplaces, educational settings, and smart environments. One interesting aspect of this integration is the use of social media for members’ participation in, and access to, the processes and services of their organization. Those services cover many different types of daily routines and life activities, such as health, education, and transport. In this survey, we compare and classify current research works according to multiple features, including the use of Social Network Analysis and Social Capital models, users’ motivations for participation and organizational costs, and adoption of the social media platform from below. Our results show that many current systems are developed without taking social structures and processes properly into consideration, with some notable and positive exceptions. Full article
(This article belongs to the Special Issue The Integration of Digital and Social Systems)
13 pages, 290 KiB  
Article
Pluralism of News and Social Plurality in the Colombian Local Media
by Pedro Molina-Rodríguez-Navas and Johamna Muñoz Lalinde
Information 2021, 12(3), 131; https://doi.org/10.3390/info12030131 - 18 Mar 2021
Cited by 1 | Viewed by 2358
Abstract
Information on the management of local administrations and the actions of the political leaders who govern them is essential for citizens to exercise their political rights. It is therefore necessary for these administrations to provide quality information that the media can use as sources for their news stories. At the same time, these media outlets have to compare sources and report while taking into account the plurality of their audiences. However, in local settings, collusion between political power and media owners restricts the plurality of news, favoring the dominant political interests and hiding the demands, interests, and protagonism of other social actors. We study this problem in the Caribbean Region of Colombia. We analyze the information that the town halls of the main cities in the region provide to the media and how the largest print newspapers and the main regional television news broadcasters report on local politics. We compare these news stories to establish whether there is a plurality of news reports. In addition, we analyze the key elements of the news items disseminated by private media outlets to establish whether they report a limited vision of reality: the topics covered, the protagonists referred to in headlines and news stories, and the sources against which the news and images are compared. The results reveal shortcomings: the information published by the administrations and the content of the private media are very similar, thus limiting the plurality of news reports and the social protagonism of other social agents. Ultimately, this hinders quality journalism that satisfies the interests of citizens. Full article
(This article belongs to the Special Issue Decentralization and New Technologies for Social Media)
14 pages, 4067 KiB  
Article
From 2D to VR Film: A Research on the Load of Different Cutting Rates Based on EEG Data Processing
by Feng Tian, Yan Zhang and Yingjie Li
Information 2021, 12(3), 130; https://doi.org/10.3390/info12030130 - 17 Mar 2021
Cited by 4 | Viewed by 2138
Abstract
Focusing on virtual reality (VR) and film cutting, this study compared and evaluated the effects of visual mode (2D, VR) and cutting rate (fast, medium, slow) on viewer load, as an attempt to bring VR research into the cognitive field. The study uses a 2 × 3 experimental design. Forty participants were randomly assigned to one of two groups and watched films at the three cutting rates. Subjective and objective data were collected during the experiment. The objective results confirm that VR films elicit stronger alpha, beta, and theta wave activity and impose a greater load. The subjective results confirm that a fast cutting rate imposes a greater load. These results provide theoretical support for further exploring evaluation methods and standards for VR films and for improving the viewing experience in the future. Full article
18 pages, 475 KiB  
Article
A Data-Driven Approach for Video Game Playability Analysis Based on Players’ Reviews
by Xiaozhou Li, Zheying Zhang and Kostas Stefanidis
Information 2021, 12(3), 129; https://doi.org/10.3390/info12030129 - 17 Mar 2021
Cited by 14 | Viewed by 4905
Abstract
Playability is a key concept in game studies defining the overall quality of video games. Although its definition and frameworks are widely studied, methods to analyze and evaluate the playability of video games are still limited. Using heuristics for playability evaluation has long been the mainstream with its usefulness in detecting playability issues during game development well acknowledged. However, such a method falls short in evaluating the overall playability of video games as published software products and understanding the genuine needs of players. Thus, this paper proposes an approach to analyze the playability of video games by mining a large number of players’ opinions from their reviews. Guided by the game-as-system definition of playability, the approach is a data mining pipeline where sentiment analysis, binary classification, multi-label text classification, and topic modeling are sequentially performed. We also conducted a case study on a particular video game product with its 99,993 player reviews on the Steam platform. The results show that such a review-data-driven method can effectively evaluate the perceived quality of video games and enumerate their merits and defects in terms of playability. Full article
(This article belongs to the Special Issue Novel Methods and Applications in Natural Language Processing)
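The review-mining pipeline described in the abstract can be sketched at a very small scale; the snippet below (toy reviews, scikit-learn components, not the authors' implementation) illustrates only the final topic-modelling stage, assuming sentiment analysis and the two classification steps have already filtered the playability-related reviews.

```python
# Toy topic modelling over player reviews to surface recurring playability themes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = [
    "controls feel clunky and the camera fights you in every boss fight",
    "great story but constant crashes make it unplayable on older hardware",
    "matchmaking takes forever and the tutorial explains nothing",
    "smooth combat, responsive controls, runs well even on a laptop",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(reviews)                       # bag-of-words features
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]   # top words per topic
    print(f"topic {k}: {top}")
```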
27 pages, 2843 KiB  
Article
Conceptualising a Cloud Business Intelligence Security Evaluation Framework for Small and Medium Enterprises in Small Towns of the Limpopo Province, South Africa
by Moses Moyo and Marianne Loock
Information 2021, 12(3), 128; https://doi.org/10.3390/info12030128 - 17 Mar 2021
Cited by 4 | Viewed by 3401
Abstract
The purpose of this study was to investigate security evaluation practices among small and medium enterprises (SMEs) in small South African towns when adopting cloud business intelligence (Cloud BI). The study employed a quantitative design in which 57 SMEs from the Limpopo Province were surveyed using an online questionnaire. The study found that: (1) the level of cybersecurity threats awareness among decision-makers was high; (2) decision-makers preferred simple checklists and guidelines over conventional security policies, standards, and frameworks; and (3) decision-makers considered financial risks, data and application security, and cloud service provider reliability as the main aspects to consider when evaluating Cloud BI applications. The study conceptualised a five-component security framework for evaluating Cloud BI applications, integrating key aspects of conventional security frameworks and methodologies. The framework was validated for relevance by IT specialists and acceptance by SME owners. The Spearman correlational test for relevance and acceptance of the proposed framework was found to be highly significant at p < 0.05. The study concluded that SMEs require user-friendly frameworks for evaluating Cloud BI applications. The major contribution of this study is the security evaluation framework conceptualised from the best practices of existing security standards and frameworks for use by decision-makers from small towns in Limpopo. The study recommends that future research consider end-user needs when customising or proposing new solutions for SMEs in small towns. Full article
(This article belongs to the Special Issue Cyber Resilience)
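For readers unfamiliar with the validation step mentioned above, the following is a small illustrative sketch (entirely hypothetical Likert scores, not the study's survey data) of a Spearman rank-correlation test of the kind used to check agreement on the framework's relevance and acceptance.

```python
# Spearman rank correlation between two sets of toy 1-5 ratings.
from scipy.stats import spearmanr

relevance_ratings  = [5, 4, 4, 3, 5, 2, 4, 5]   # hypothetical IT-specialist scores
acceptance_ratings = [4, 4, 5, 3, 5, 2, 3, 5]   # hypothetical SME-owner scores

rho, p_value = spearmanr(relevance_ratings, acceptance_ratings)
print(f"rho = {rho:.2f}, p = {p_value:.4f}")     # significant if p < 0.05
```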
19 pages, 335 KiB  
Article
Evaluating Consumers’ Willingness to Pay for Delay Compensation Services in Intra-City Delivery—A Value Optimization Study Using Choice Experiments
by Ruixu Pan, Yujie Huang and Xiongwu Xiao
Information 2021, 12(3), 127; https://doi.org/10.3390/info12030127 - 16 Mar 2021
Cited by 1 | Viewed by 2838
Abstract
Intra-city delivery has developed rapidly along with the expansion of the logistics industry. Timely delivery is one of the main requirements of consumers and has become a major challenge for delivery service providers. To compensate for the adverse effects of delivery delays, platforms have launched delay compensation services for consumers who place orders. This study quantitatively evaluated consumer perception of the delay compensation service in intra-city deliveries using a choice experiment. We explored how different attributes of the delay compensation service plan affect consumer preference and willingness to pay for the services. These service attributes are “delay probability display”, “compensation amount”, “compensation method”, “penalty method for riders”, and “one-time order price”. Using a multinomial logit model to analyze the questionnaire results, the respondents showed a positive preference for on-time delivery probability display, progressive compensation amounts, and cash compensation. The results also show that the respondents opposed penalty schemes in which the riders would bear the compensation costs. Positive preference attributes help enhance consumers’ willingness to order and pay for the program. Based on our findings and conclusions, we propose several recommendations to improve the delay compensation service program. Full article
(This article belongs to the Special Issue Data Analytics and Consumer Behavior)
18 pages, 2216 KiB  
Article
The Decline of User Experience in Transition from Automated Driving to Manual Driving
by Mikael Johansson, Mattias Mullaart Söderholm, Fjollë Novakazi and Annie Rydström
Information 2021, 12(3), 126; https://doi.org/10.3390/info12030126 - 16 Mar 2021
Cited by 4 | Viewed by 2663
Abstract
Automated driving technologies are rapidly being developed. However, until vehicles are fully automated, the control of the dynamic driving task will be shifted between the driver and automated driving system. This paper aims to explore how transitions from automated driving to manual driving affect user experience and how that experience correlates to take-over performance. In the study 20 participants experienced using an automated driving system during rush-hour traffic in the San Francisco Bay Area, CA, USA. The automated driving system was available in congested traffic situations and when active, the participants could engage in non-driving related activities. The participants were interviewed afterwards regarding their experience of the transitions. The findings show that most of the participants experienced the transition from automated driving to manual driving as negative. Their user experience seems to be shaped by several reasons that differ in temporality and are derived from different phases during the transition process. The results regarding correlation between participants’ experience and take-over performance are inconclusive, but some trends were identified. The study highlights the need for new design solutions that do not only improve drivers’ take-over performance, but also enhance user experience during take-over requests from automated to manual driving. Full article
15 pages, 14889 KiB  
Article
Does Salience of Neighbor-Comparison Information Attract Attention and Conserve Energy? Eye-Tracking Experiment and Interview with Korean Local Apartment Residents
by Sunghee Choi
Information 2021, 12(3), 125; https://doi.org/10.3390/info12030125 - 15 Mar 2021
Viewed by 1565
Abstract
The purpose of this paper is to examine whether the salience of neighbor comparison information attracts more attention from residents and consequently leads to significant energy conservation. An eye-tracking experiment with 54 residents of a local apartment complex in Korea found that the average time of attention to the neighbor comparison information increased to 277 ms when the information was four times larger and located to the far left. However, interviews with the subjects suggest that the salience of the information is seemingly unrelated to energy conservation, because most of them did not agree with the social consensus that individuals need to refrain from consuming energy when they know that they have consumed more than their neighbors’ average. Utility data on 502 households in the apartments revealed that, of the households notified that they consumed more than their neighbors, fewer than 50% reduced their energy consumption, which supports the interview results. Therefore, it was concluded that neighbor comparison information did not lead to significant energy conservation effects in the community, although the salience of the information helped attract more attention to it. The unavailability of household-level data remained a limitation in clarifying the effect for individual households. Full article
22 pages, 1518 KiB  
Article
Numerical Markov Logic Network: A Scalable Probabilistic Framework for Hybrid Knowledge Inference
by Ping Zhong, Zhanhuai Li, Qun Chen, Boyi Hou and Murtadha Ahmed
Information 2021, 12(3), 124; https://doi.org/10.3390/info12030124 - 15 Mar 2021
Cited by 2 | Viewed by 2111
Abstract
In recent years, the Markov Logic Network (MLN) has emerged as a powerful tool for knowledge-based inference due to its ability to combine first-order logic inference and probabilistic reasoning. Unfortunately, current MLN solutions cannot efficiently support knowledge inference involving arithmetic expressions, which is required to model the interaction between logic relations and numerical values in many real applications. In this paper, we propose a probabilistic inference framework, called the Numerical Markov Logic Network (NMLN), to enable efficient inference of hybrid knowledge involving both logic and arithmetic expressions. We first introduce the hybrid knowledge rules, then define an inference model, and finally present a technique based on convex optimization for efficient inference. Built on a decomposable exp-loss function, the proposed inference model can process hybrid knowledge rules more effectively and efficiently than existing MLN approaches. Finally, we empirically evaluate the performance of the proposed approach on real data. Our experiments show that, compared to the state-of-the-art MLN solution, it achieves better prediction accuracy while significantly reducing inference time. Full article
(This article belongs to the Special Issue What Is Information? (2020))
19 pages, 714 KiB  
Article
A Genealogical Analysis of Information and Technics
by J.J. Sylvia IV
Information 2021, 12(3), 123; https://doi.org/10.3390/info12030123 - 12 Mar 2021
Cited by 1 | Viewed by 3046
Abstract
This paper explores how the concepts of information and technics have been leveraged differently by a variety of philosophical and epistemological frameworks over time. Using the Foucauldian methodology of genealogical historiography, it analyzes how the use of these concepts has impacted the way we understand the world and what we can know about it. As these concepts are so ingrained in contemporary technologies of the information age, understanding how they have changed over time can help make clearer how they continue to shape our processes of subjectivation. The analysis reveals that the predominant understanding of information and technics today is based on a cybernetic approach that conceptualizes information as a resource. However, the analysis also reveals that Michel Foucault’s conceptualization of technics resonates with that of the Sophists, offering an opportunity to rethink contemporary conceptualizations of information and technics in a way that connects to posthuman philosophical systems and affords new approaches to communication and media studies. Full article
11 pages, 1126 KiB  
Article
Counterintelligence Technologies: An Exploratory Case Study of Preliminary Credibility Assessment Screening System in the Afghan National Defense and Security Forces
by João Reis, Marlene Amorim, Nuno Melão, Yuval Cohen and Joana Costa
Information 2021, 12(3), 122; https://doi.org/10.3390/info12030122 - 12 Mar 2021
Cited by 4 | Viewed by 2716
Abstract
The preliminary credibility assessment screening system (PCASS) is a US-based program currently being implemented by intelligence units of the North Atlantic Treaty Organization (NATO) to perform the initial screening of individuals suspected of infiltrating the Afghan National Defense and Security Forces (ANDSF). Sensors have been instrumental in the PCASS, leading to organizational change. The aim of this research is to describe how the ANDSF adapted to the implementation of PCASS and the changes implemented since the beginning of the program. To do so, we conducted a qualitative, exploratory, and descriptive case study, which allows one to understand, through a series of data collection sources, a real-life phenomenon about which little is known. The results suggest that the sensors used in PCASS empower security forces with reliable technologies to identify and neutralize internal threats. It then becomes evident that the technological leadership that PCASS provides allows the development of relatively stable and consistent organizational change, fulfilling the objectives of NATO and the ANDSF. Full article
(This article belongs to the Special Issue Big Data Integration and Intelligent Information Integration)
21 pages, 1669 KiB  
Article
A Novel Approach for Classification and Forecasting of Time Series in Particle Accelerators
by Sichen Li, Mélissa Zacharias, Jochem Snuverink, Jaime Coello de Portugal, Fernando Perez-Cruz, Davide Reggiani and Andreas Adelmann
Information 2021, 12(3), 121; https://doi.org/10.3390/info12030121 - 12 Mar 2021
Cited by 12 | Viewed by 2738
Abstract
The beam interruptions (interlocks) of particle accelerators, despite being necessary safety measures, lead to abrupt operational changes and a substantial loss of beam time. A novel time series classification approach is applied to decrease beam time loss in the High-Intensity Proton Accelerator complex by forecasting interlock events. The forecasting is performed through binary classification of windows of multivariate time series. The time series are transformed into Recurrence Plots which are then classified by a Convolutional Neural Network, which not only captures the inner structure of the time series but also uses the advances of image classification techniques. Our best-performing interlock-to-stable classifier reaches an Area under the ROC Curve value of 0.71±0.01 compared to 0.65±0.01 of a Random Forest model, and it can potentially reduce the beam time loss by 0.5±0.2 s per interlock. Full article
(This article belongs to the Special Issue Machine Learning and Accelerator Technology)
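The transformation at the heart of the approach above can be illustrated compactly; the sketch below (a synthetic univariate signal and an arbitrary threshold, not the authors' code) shows how a time-series window becomes a recurrence plot, i.e. a binary image marking which pairs of time points are close, which can then be classified by a CNN.

```python
# Turn a time-series window into a recurrence plot (binary recurrence matrix).
import numpy as np

def recurrence_plot(x, eps=0.1):
    x = np.asarray(x, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])   # pairwise distances between time points
    return (dist <= eps).astype(np.uint8)    # 1 where the trajectory recurs

signal = np.sin(np.linspace(0, 4 * np.pi, 50))
rp = recurrence_plot(signal, eps=0.2)
print(rp.shape, rp.sum())                    # a 50x50 binary "image" of the window
```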
12 pages, 1859 KiB  
Article
Strategic Challenges of Human Resources Allocation in Industry 4.0
by Majid Ziaei Nafchi and Hana Mohelská
Information 2021, 12(3), 120; https://doi.org/10.3390/info12030120 - 11 Mar 2021
Cited by 10 | Viewed by 4291
Abstract
The emergence of the fourth industrial revolution (Industry 4.0, hereinafter I 4.0) has led to an entirely fresh approach to production, helping to enhance key industrial processes and thereby increase labor productivity and competitiveness. Simultaneously, I 4.0 compels changes in the organization of work and influences the lives of employees. The paper constructs a model for predicting the allocation of human resources in the sectors of the national economy of the Czech Republic in connection with I 4.0. The model used in this research visualizes the shift of labor across the economic sectors of the Czech Republic from 2013 into the near future. The main contribution of this article is to show the growth of employment in the high-tech services sector, which will have an ascending trend. Full article
(This article belongs to the Special Issue Digitalized Economy, Society and Information Management)
14 pages, 2305 KiB  
Article
Malware Detection Based on Code Visualization and Two-Level Classification
by Vassilios Moussas and Antonios Andreatos
Information 2021, 12(3), 118; https://doi.org/10.3390/info12030118 - 11 Mar 2021
Cited by 18 | Viewed by 3804
Abstract
Malware creators generate new malicious software samples by making minor changes in previously generated code, in order to reuse malicious code as well as to go unnoticed by signature-based antivirus software. As a result, various families of variations of the same initial code exist today. Visualization of compiled executables for malware analysis was proposed several years ago. Visualization can greatly assist malware classification and requires neither disassembly nor code execution. Moreover, new variations of known malware families are instantly detected, in contrast to traditional signature-based antivirus software. This paper addresses the problem of identifying variations of existing malware visualized as images. A new malware detection system based on a two-level Artificial Neural Network (ANN) is proposed. The classification is based on file and image features. The proposed system is tested on the ‘Malimg’ dataset consisting of visual representations of well-known malware families. From this set, some important image features are extracted. Based on these features, the ANN is trained. Then, this ANN is used to detect and classify other samples of the dataset. Malware families causing confusion are classified by a second level of ANNs. The proposed two-level ANN method excels in simplicity, accuracy, and speed; it is easy to implement and fast to run, so it can be applied to antivirus software, smart firewalls, web applications, etc. Full article
(This article belongs to the Special Issue Cyberspace Security, Privacy & Forensics)
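The visualization step described above is commonly realized by mapping each byte of an executable to one grayscale pixel; the sketch below (assumed preprocessing with a hypothetical file path, not the paper's implementation) shows that mapping.

```python
# Visualize a compiled executable as a grayscale image: one byte = one pixel.
import numpy as np

def binary_to_image(path, width=256):
    data = np.fromfile(path, dtype=np.uint8)        # raw bytes of the executable
    height = len(data) // width
    return data[: height * width].reshape(height, width)

# img = binary_to_image("sample.exe")   # hypothetical file path
# print(img.shape, img.dtype)           # e.g. (N, 256) uint8 grayscale image
```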
13 pages, 300 KiB  
Article
Bridging Ride and Play Comfort
by Zeliang Zhang, Kang Xiaohan, Mohd Nor Akmal Khalid and Hiroyuki Iida
Information 2021, 12(3), 119; https://doi.org/10.3390/info12030119 - 10 Mar 2021
Cited by 5 | Viewed by 2690
Abstract
The notion of comfort with respect to rides, such as roller coasters, is typically addressed from the perspective of the physical ride, where the convenience of transportation is redefined to minimize risk and maximize thrill. As a popular form of entertainment, roller coasters sit at the nexus of rides and games, providing a suitable environment to measure both the mental and physical experiences of rider comfort. In this paper, the way risk and comfort affect such experiences is investigated, and the connection between play comfort and ride comfort is explored. A roller coaster ride simulation is adopted as the target environment for this research, as it combines the feelings of thrill and comfort simultaneously. At the same time, this paper also expands research on roller coaster rides while bridging rides and games via an analogy with the laws of physics, a concept currently known as motion in mind. This study’s contribution is a roller coaster ride model, which provides an extended understanding of the relationship between physical performance and mental experience relative to the concept of motion in mind, while establishing critical criteria for a comfortable experience of both ride and play. Full article
(This article belongs to the Section Information Theory and Methodology)
16 pages, 2409 KiB  
Article
A Quaternion Gated Recurrent Unit Neural Network for Sensor Fusion
by Uche Onyekpe, Vasile Palade, Stratis Kanarachos and Stavros-Richard G. Christopoulos
Information 2021, 12(3), 117; https://doi.org/10.3390/info12030117 - 09 Mar 2021
Cited by 13 | Viewed by 3830
Abstract
Recurrent Neural Networks (RNNs) are known for their ability to learn relationships within temporal sequences. Gated Recurrent Unit (GRU) networks have found use in challenging time-dependent applications such as Natural Language Processing (NLP), financial analysis, and sensor fusion due to their capability to cope with the vanishing gradient problem. GRUs are also known to be more computationally efficient than their variant, the Long Short-Term Memory neural network (LSTM), due to their less complex structure, and as such are more suitable for applications requiring more efficient management of computational resources. Many such applications require a stronger mapping of their features to further enhance the prediction accuracy. A novel Quaternion Gated Recurrent Unit (QGRU) is proposed in this paper, which leverages the internal and external dependencies within the quaternion algebra to map correlations within and across multidimensional features. The QGRU can be used to efficiently capture the inter- and intra-dependencies within multidimensional features, unlike the GRU, which only captures the dependencies within the sequence. Furthermore, the performance of the proposed method is evaluated on a sensor fusion problem involving navigation in Global Navigation Satellite System (GNSS)-deprived environments as well as on a human activity recognition problem. The results obtained show that the QGRU produces competitive results with almost 3.7 times fewer parameters compared to the GRU.
Full article
(This article belongs to the Special Issue Intelligent Distributed Computing)
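Quaternion layers obtain their parameter savings by replacing ordinary matrix-vector products with the Hamilton product, which ties the four components of each feature together; the snippet below is a small standalone sketch of that product (standard quaternion algebra, not the authors' QGRU code).

```python
# Hamilton product of two quaternions q = w + xi + yj + zk.
import numpy as np

def hamilton_product(q, p):
    """q, p: arrays [w, x, y, z] representing quaternions."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

print(hamilton_product([0, 1, 0, 0], [0, 0, 1, 0]))   # i * j = k -> [0, 0, 0, 1]
```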
19 pages, 6585 KiB  
Article
Modeling of Information Processes in Social Networks
by Sergey Yablochnikov, Mikhail Kuptsov and Maksim Mahiboroda
Information 2021, 12(3), 116; https://doi.org/10.3390/info12030116 - 09 Mar 2021
Cited by 4 | Viewed by 2010
Abstract
In order to model information dissemination in social networks, a special methodology of sampling statistical data formation has been implemented. The probability distribution laws of various characteristics of personal and group accounts in four social networks are investigated. Stochastic aspects of interrelations between these indicators were analyzed. The classification of groups of social network users is proposed, and their characteristic features and main empirical regularities of mutual transitions are marked. Regression models of forecasting changes in the number of users of the selected groups have been obtained. Full article
(This article belongs to the Special Issue Digitalized Economy, Society and Information Management)
17 pages, 5468 KiB  
Article
Maximizing Image Information Using Multi-Chimera Transform Applied on Face Biometric Modality
by Ahmad Saeed Mohammad, Dhafer Zaghar and Walaa Khalaf
Information 2021, 12(3), 115; https://doi.org/10.3390/info12030115 - 08 Mar 2021
Cited by 2 | Viewed by 1627
Abstract
With the development of mobile technology, the usage of media data has increased dramatically. Therefore, data reduction represents a research field for maintaining valuable information. In this paper, a new scheme called the Multi Chimera Transform (MCT), based on data reduction with high information preservation, is proposed; it aims to improve the reconstructed data by producing three parameters from each 16×16 block of data. MCT is a 2D transform that depends on constructing a codebook of 256 blocks picked from selected images with low similarity. The proposed transformation was applied to the solid and soft biometric modalities of the AR database, giving high information preservation with a small resulting file size. The proposed method produced outstanding performance compared with KLT and WT in terms of SSIM and PSNR. The highest SSIM was 0.87 for the proposed MCT scheme on the full images of the AR database, while the existing methods KLT and WT reached 0.81 and 0.68, respectively. In addition, the highest PSNR was 27.23 dB for the proposed scheme on the warped facial images of the AR database, while the existing methods KLT and WT reached 24.70 dB and 21.79 dB, respectively. Full article
(This article belongs to the Section Information Applications)
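For reference, the two quality metrics used in the comparison above can be computed as follows; this is an illustrative sketch on random toy images (not the AR-database experiments), assuming scikit-image is available.

```python
# PSNR and SSIM between an original image and a noisy "reconstruction".
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noise = rng.integers(-5, 6, size=(64, 64))
reconstructed = np.clip(original.astype(int) + noise, 0, 255).astype(np.uint8)

print("PSNR:", peak_signal_noise_ratio(original, reconstructed, data_range=255))
print("SSIM:", structural_similarity(original, reconstructed, data_range=255))
```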
56 pages, 12679 KiB  
Article
Multimodal Approaches for Indoor Localization for Ambient Assisted Living in Smart Homes
by Nirmalya Thakur and Chia Y. Han
Information 2021, 12(3), 114; https://doi.org/10.3390/info12030114 - 07 Mar 2021
Cited by 45 | Viewed by 5008
Abstract
This work makes multiple scientific contributions to the field of Indoor Localization for Ambient Assisted Living in Smart Homes. First, it presents a Big-Data driven methodology that studies the multimodal components of user interactions and analyzes the data from Bluetooth Low Energy (BLE) beacons and BLE scanners to detect a user’s indoor location in a specific ‘activity-based zone’ during Activities of Daily Living. Second, it introduces a context independent approach that can interpret the accelerometer and gyroscope data from diverse behavioral patterns to detect the ‘zone-based’ indoor location of a user in any Internet of Things (IoT)-based environment. These two approaches achieved performance accuracies of 81.36% and 81.13%, respectively, when tested on a dataset. Third, it presents a methodology to detect the spatial coordinates of a user’s indoor position that outperforms all similar works in this field, as per the associated root mean squared error—one of the performance evaluation metrics in ISO/IEC18305:2016—an international standard for testing Localization and Tracking Systems. Finally, it presents a comprehensive comparative study that includes Random Forest, Artificial Neural Network, Decision Tree, Support Vector Machine, k-NN, Gradient Boosted Trees, Deep Learning, and Linear Regression, to address the challenge of identifying the optimal machine learning approach for Indoor Localization. Full article
(This article belongs to the Special Issue Pervasive Computing in IoT)
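The evaluation metric referenced above (root mean squared error over predicted positions, as in ISO/IEC 18305:2016) can be sketched with toy coordinates; this is illustrative only, not the paper's dataset or code.

```python
# RMSE between predicted and ground-truth 2D indoor coordinates.
import numpy as np

true_xy = np.array([[1.0, 2.0], [3.5, 0.5], [2.0, 4.0]])   # hypothetical ground truth (m)
pred_xy = np.array([[1.2, 2.1], [3.0, 0.8], [2.3, 3.6]])   # hypothetical predictions (m)

rmse = np.sqrt(np.mean(np.sum((true_xy - pred_xy) ** 2, axis=1)))
print(f"RMSE = {rmse:.3f} m")
```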
26 pages, 6679 KiB  
Article
Dynamic Community Structure in Online Social Groups
by Barbara Guidi and Andrea Michienzi
Information 2021, 12(3), 113; https://doi.org/10.3390/info12030113 - 05 Mar 2021
Cited by 2 | Viewed by 1877
Abstract
One of the main ideas about the Internet is to rethink its services in a user-centric fashion. This translates into human-scale services with devices that will become smarter and make decisions in place of their respective owners. Online Social Networks and, in particular, Online Social Groups, such as Facebook Groups, will be at the epicentre of this revolution because of their great relevance in current society. Despite the vast number of studies on human behaviour in Online Social Media, the characteristics of Online Social Groups are still unknown. In this paper, we propose a study of the structure of users inside Facebook Groups driven by dynamic community detection. Communities are extracted from the interactions among the members of a group, with the aim of finding dense communication groups of users and tracking their evolution over time, in order to discover social properties of Online Social Groups. The analysis is carried out on the activity of 17 Facebook Groups, using 8 community detection algorithms and considering 2 possible interaction lifespans. Results show that interaction communities in OSGs are very fragmented, but community detection tools are capable of uncovering relevant structures. The study of community quality gives important insights into the community structure, and increasing the interaction lifespan does not necessarily result in more clusterized or bigger communities. Full article
(This article belongs to the Special Issue The Integration of Digital and Social Systems)
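The per-window community extraction described above can be illustrated on a tiny interaction graph; the sketch below (toy weighted edges and a standard modularity-based algorithm, not the specific algorithms or data used in the study) shows one such extraction step, which would be repeated per time window to track evolution.

```python
# Extract interaction communities from one time window of a toy weighted graph.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
G.add_weighted_edges_from([
    ("a", "b", 5), ("b", "c", 4), ("a", "c", 3),   # one dense interaction cluster
    ("d", "e", 6), ("e", "f", 2), ("d", "f", 4),   # another cluster
    ("c", "d", 1),                                  # weak bridge between them
])

communities = greedy_modularity_communities(G, weight="weight")
print([sorted(c) for c in communities])
```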
15 pages, 458 KiB  
Article
An Identity-Based Cross-Domain Authenticated Asymmetric Group Key Agreement
by Qingnan Chen, Ting Wu, Chengnan Hu, Anbang Chen and Qiuhua Zheng
Information 2021, 12(3), 112; https://doi.org/10.3390/info12030112 - 05 Mar 2021
Cited by 8 | Viewed by 1935
Abstract
Cross-domain authenticated asymmetric group key agreement allows group members in different domains to establish a secure group communication channel in which the sender can be anyone. However, existing schemes do not support batch verification in the group key negotiation phase, which makes them inefficient. To address this problem, an identity-based cross-domain authenticated asymmetric group key agreement that supports batch verification is proposed. The performance analysis shows that this protocol is highly efficient. Finally, the proposed protocol is proved to be secure under the k-Bilinear Diffie–Hellman Exponent assumption. Full article
(This article belongs to the Section Information and Communications Technology)
23 pages, 2967 KiB  
Review
An IT Service Management Literature Review: Challenges, Benefits, Opportunities and Implementation Practices
by João Serrano, João Faustino, Daniel Adriano, Rúben Pereira and Miguel Mira da Silva
Information 2021, 12(3), 111; https://doi.org/10.3390/info12030111 - 05 Mar 2021
Cited by 9 | Viewed by 8077
Abstract
Information technology (IT) service management (ITSM) is considered a collection of frameworks that support organizations in managing services. The implementation of these kinds of frameworks is constantly increasing in the IT service provider domain. The main objective is to define and manage IT services throughout their life cycle. However, the literature contains scarcely any research describing the main concepts of ITSM, and many organizations still struggle in several contexts in this domain, mainly during implementation. This research aims to develop a reference study detailing the main concepts related to ITSM. Thus, a systematic literature review is performed. In total, 47 articles were selected from top journals and conferences. The benefits, challenges, opportunities, and practices for ITSM implementation were extracted, critically analysed, and then discussed. Full article
(This article belongs to the Section Review)
14 pages, 1747 KiB  
Article
BGP Neighbor Trust Establishment Mechanism Based on the Bargaining Game
by Peipei Li, Bin Lu and Daofeng Li
Information 2021, 12(3), 110; https://doi.org/10.3390/info12030110 - 04 Mar 2021
Cited by 2 | Viewed by 2016
Abstract
The Border Gateway Protocol (BGP) is the standard inter-domain routing protocol on the Internet. Autonomous System (AS) traffic is forwarded by BGP neighbors. During route selection, malicious or inactive neighbors can degrade the network’s performance or even cause the network to crash. Therefore, choosing trusted and safe neighbors is an essential part of BGP security research. In response to this problem, in this paper we propose a BGP Neighbor Trust Establishment Mechanism based on the Bargaining Game (BNTE-BG). By combining service quality attributes such as bandwidth, packet loss rate, jitter, delay, and price with bargaining game theory, it allows an AS to independently select trusted neighbors that satisfy its Quality of Service requirements. When the trusted neighbors are forwarding data, we draw on the grey correlation algorithm to calculate the neighbors’ behavioral trust and detect malicious or inactive BGP neighbors. Full article
(This article belongs to the Section Information and Communications Technology)
25 pages, 9816 KiB  
Article
Analysis and Prediction of COVID-19 Using SIR, SEIQR, and Machine Learning Models: Australia, Italy, and UK Cases
by Iman Rahimi, Amir H. Gandomi, Panagiotis G. Asteris and Fang Chen
Information 2021, 12(3), 109; https://doi.org/10.3390/info12030109 - 03 Mar 2021
Cited by 60 | Viewed by 6053
Abstract
The novel coronavirus disease, also known as COVID-19, is a disease outbreak that was first identified in Wuhan, a central Chinese city. In this report, a short analysis focusing on Australia, Italy, and the UK is conducted. The analysis includes confirmed and recovered cases and deaths, the growth rate in Australia compared with those in Italy and the UK, and the trend of the disease in different Australian regions. Mathematical approaches based on susceptible, infected, and recovered (SIR) and susceptible, exposed, infected, quarantined, and recovered (SEIQR) models are proposed to predict the epidemiology in the above-mentioned countries. Since the performance of the classic forms of SIR and SEIQR depends on parameter settings, several optimization algorithms, namely Broyden–Fletcher–Goldfarb–Shanno (BFGS), conjugate gradients (CG), limited-memory bound-constrained BFGS (L-BFGS-B), and Nelder–Mead, are used to optimize the parameters and the predictive capabilities of the SIR and SEIQR models. The results of the optimized SIR and SEIQR models were compared with those of two well-known machine learning approaches, i.e., the Prophet algorithm and the logistic function. The results demonstrate the different behaviors of these algorithms in different countries as well as the better performance of the improved SIR and SEIQR models. Moreover, the Prophet algorithm provided better prediction performance than the logistic function, and better prediction performance for the Italian and UK cases than for the Australian cases. Therefore, the Prophet algorithm seems suitable for data with an increasing trend in the context of a pandemic. Optimization of the SIR and SEIQR model parameters yielded a significant improvement in the prediction accuracy of the models. Despite the availability of several algorithms for trend prediction in this pandemic, no single algorithm is optimal for all cases. Full article
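The parameter-fitting idea described above can be sketched compactly; the snippet below uses synthetic data (not the paper's country case studies) to fit SIR parameters with one of the optimizers named in the abstract, L-BFGS-B, by integrating the SIR equations and minimizing the squared error against observed infection counts.

```python
# Fit SIR parameters (beta, gamma) to a synthetic infection curve with L-BFGS-B.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import minimize

N, I0, days = 1_000_000, 10, np.arange(60)

def sir(y, t, beta, gamma):
    S, I, R = y
    return [-beta * S * I / N, beta * S * I / N - gamma * I, gamma * I]

def infected_curve(beta, gamma):
    return odeint(sir, [N - I0, I0, 0], days, args=(beta, gamma))[:, 1]

observed = infected_curve(0.3, 0.1)                      # synthetic "observed" cases

def loss(params):
    beta, gamma = params
    return np.mean((infected_curve(beta, gamma) - observed) ** 2)

fit = minimize(loss, x0=[0.5, 0.2], method="L-BFGS-B", bounds=[(1e-4, 2), (1e-4, 1)])
print("estimated beta, gamma:", fit.x)
```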