Information, Volume 12, Issue 6 (June 2021) – 40 articles

Cover Story: Public websites offer information on a variety of topics and services and are accessed by users of varying skills to browse different types of electronic document repositories. However, the complex website structure and diversity of web browsing behavior create a challenging task for click prediction. This paper presents the results of a novel reinforcement learning approach to model user browsing patterns in a hierarchically ordered municipal website. We study how accurate the predictor for browsing history is when the target pages are not the immediate next pages pointed to by hyperlinks but instead appear a number of levels down the hierarchy. We compare the performance of traditional types of baseline classifiers against our reinforcement learning-based training algorithm.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms, with PDF as the official format. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
28 pages, 890 KiB  
Article
Obstacles to Applying Electronic Exams amidst the COVID-19 Pandemic: An Exploratory Study in the Palestinian Universities in Gaza
by Raed Bashitialshaaer, Mohammed Alhendawi and Helen Avery
Information 2021, 12(6), 256; https://doi.org/10.3390/info12060256 - 20 Jun 2021
Cited by 11 | Viewed by 3940
Abstract
In the context of the COVID-19 pandemic, we aim to identify and understand the obstacles and barriers to applying electronic exams successfully in the process of distance education. We followed an exploratory descriptive approach through a questionnaire (one general, open question) with a sample of university teachers and students in four of the largest Palestinian universities in Gaza. A total of 152 of the 300 distributed questionnaires were returned. The results indicate that the university teachers and students faced 13 obstacles, of which 9 were shown to be shared between teachers and students, with significant agreement in the regression analysis. Several of the obstacles perceived by respondents are in line with the literature and can be addressed by improved examination design, training, and preparation, or the use of suitable software. Other obstacles were related to infrastructure issues, leading to frequent power outages and unreliable internet access. Difficult living conditions in students' homes and disparities in access to suitable devices or the internet make social equity in connection with high-stakes examinations a major concern. Some recommendations and suggestions are listed at the end of this study, considering local conditions in the Gaza governorates.

15 pages, 1037 KiB  
Article
Modeling Data Flows with Network Calculus in Cyber-Physical Systems: Enabling Feature Analysis for Anomaly Detection Applications
by Nicholas Jacobs, Shamina Hossain-McKenzie and Adam Summers
Information 2021, 12(6), 255; https://doi.org/10.3390/info12060255 - 19 Jun 2021
Cited by 4 | Viewed by 2105
Abstract
The electric grid is becoming increasingly cyber-physical with the addition of smart technologies, new communication interfaces, and automated grid-support functions. Because of this, it is no longer sufficient to study only the physical system dynamics; the cyber system must also be monitored to examine cyber-physical interactions and effects on the overall system. To address this gap for both operational and security needs, cyber-physical situational awareness is needed to monitor the system for faults or malicious activity. Techniques and models to understand the physical system (the power system operation) exist, but methods to study the cyber system are needed, which can assist in understanding how network traffic and changes to network conditions affect applications such as data analysis, intrusion detection systems (IDS), and anomaly detection. In this paper, we examine and develop models of data flows in communication networks of cyber-physical systems (CPSs) and explore how network calculus can be utilized to develop those models for CPSs, with a focus on anomaly and intrusion detection. This provides a foundation for methods to examine how changes to behavior in the CPS can be modeled and for investigating cyber effects in CPSs in anomaly detection applications.
(This article belongs to the Special Issue Cyber-Physical Systems Security and Resilience)
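Though the paper develops richer flow models, the flavor of network calculus can be shown with the textbook token-bucket/rate-latency bounds. The following minimal Python sketch (all parameter values are made up for illustration) computes the worst-case delay and backlog:

```python
def nc_bounds(b, r, R, T):
    """Worst-case bounds for a token-bucket flow (burst b, rate r) crossing
    a rate-latency server (rate R, latency T), assuming r <= R."""
    if r > R:
        raise ValueError("flow rate exceeds service rate: backlog is unbounded")
    delay = T + b / R        # horizontal deviation between arrival/service curves
    backlog = b + r * T      # vertical deviation between the curves
    return delay, backlog

# Example: 12 kB burst, 1 Mb/s flow through a 10 Mb/s server with 2 ms latency.
d, q = nc_bounds(b=12_000 * 8, r=1e6, R=10e6, T=0.002)
print(f"delay <= {d * 1000:.1f} ms, backlog <= {q:.0f} bits")
```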

12 pages, 476 KiB  
Article
On the Distributed Construction of Stable Networks in Polylogarithmic Parallel Time
by Matthew Connor, Othon Michail and Paul Spirakis
Information 2021, 12(6), 254; https://doi.org/10.3390/info12060254 - 19 Jun 2021
Cited by 2 | Viewed by 1612
Abstract
We study the class of networks that can be created in polylogarithmic parallel time by network constructors: groups of anonymous agents that interact randomly under a uniform random scheduler with the ability to form connections between each other. Starting from an empty network, the goal is to construct a stable network that belongs to a given family. We prove that the class of trees where each node has any k ≥ 2 children can be constructed in O(log n) parallel time with high probability. We show that constructing networks that are k-regular requires Ω(n) time, but a minimal relaxation to (l, k)-regular networks, where l = k − 1, can be constructed in polylogarithmic parallel time for any fixed k, where k > 2. We further demonstrate that when the finite-state assumption is relaxed and k is allowed to grow with n, then k = log log n acts as a threshold above which network construction is, again, polynomial time. We use this to provide a partial characterisation of the class of polylogarithmic time network constructors.
(This article belongs to the Special Issue Distributed Systems and Mobile Computing)

14 pages, 2073 KiB  
Article
A Secure Steganographic Channel Using DNA Sequence Data and a Bio-Inspired XOR Cipher
by Amal Khalifa
Information 2021, 12(6), 253; https://doi.org/10.3390/info12060253 - 18 Jun 2021
Cited by 4 | Viewed by 3123
Abstract
Secure communication is becoming an urgent need in a digital world where terabytes of sensitive information are sent back and forth over public networks. In this paper, we combine the power of both encryption and steganography to build a secure channel of communication between two parties. The proposed method uses DNA sequence data as a cover to hide the secret message. The hiding process is performed in phases that start with a complementary substitution operation followed by a random insertion process. Furthermore, before the hiding process takes place, the message is encrypted to secure its contents. Here, we propose an XOR cipher that is also based on how DNA data are digitally represented and stored. A fixed-size header is embedded right before the message itself to facilitate the blind extraction process. The experimental results showed an outstanding performance of the proposed technique, in comparison with other methods, in terms of capacity, security, and blind extraction.
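To make the bio-inspired XOR idea concrete, here is a minimal sketch that assumes the common 2-bit digital encoding of the DNA alphabet (A=00, C=01, G=10, T=11); the paper's actual cipher, key schedule, and header layout may differ:

```python
B2V = {"A": 0, "C": 1, "G": 2, "T": 3}          # assumed 2-bit encoding
V2B = {v: b for b, v in B2V.items()}

def dna_xor(seq, key):
    """XOR two DNA strings base-by-base in the 2-bit value domain,
    repeating the key as needed."""
    return "".join(V2B[B2V[s] ^ B2V[key[i % len(key)]]]
                   for i, s in enumerate(seq))

message = "ACGTAC"
key = "GATC"
cipher = dna_xor(message, key)
assert dna_xor(cipher, key) == message          # XOR is its own inverse
print(cipher)
```

Because XOR is its own inverse, applying the same key recovers the plaintext sequence, which is what makes blind extraction straightforward once the header is located.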

14 pages, 1558 KiB  
Article
Adaptive Multi-Scale Wavelet Neural Network for Time Series Classification
by Kewei Ouyang, Yi Hou, Shilin Zhou and Ye Zhang
Information 2021, 12(6), 252; https://doi.org/10.3390/info12060252 - 17 Jun 2021
Cited by 4 | Viewed by 2072
Abstract
Wavelet transform is a well-known multi-resolution tool for analyzing time series in the time-frequency domain. Wavelet bases are diverse but predefined manually, without taking the data into consideration. Hence, it is a great challenge to select an appropriate wavelet basis to separate the low and high frequency components for the task at hand. Inspired by the lifting scheme of the second-generation wavelet, the updater and predictor are learned directly from the time series to separate its low and high frequency components. An adaptive multi-scale wavelet neural network (AMSW-NN) is proposed for time series classification in this paper. First, candidate frequency decompositions are obtained by a multi-scale convolutional neural network in conjunction with a depthwise convolutional neural network. Then, a selector is used to choose the optimal frequency decomposition from the candidates. At last, the optimal frequency decomposition is fed to a classification network to predict the label. A comprehensive experiment is performed on the UCR archive. The results demonstrate that, compared with the classical wavelet transform, AMSW-NN could improve the performance based on different classification networks.
(This article belongs to the Section Artificial Intelligence)
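For readers unfamiliar with the lifting scheme that motivates AMSW-NN, one level of the classic split/predict/update decomposition looks as follows. This is a Haar-style sketch with fixed predictor and updater, which the paper replaces with learned convolutional networks:

```python
import numpy as np

def lifting_step(x):
    """One lifting level: split into even/odd samples, predict odd from even,
    then update the even branch so it keeps the running (low-frequency) mean."""
    even, odd = x[::2], x[1::2]
    detail = odd - even          # predict step: high-frequency residual
    approx = even + detail / 2   # update step: Haar low-frequency component
    return approx, detail

x = np.sin(2 * np.pi * 2 * np.linspace(0, 1, 16))
approx, detail = lifting_step(x)
print(approx.shape, detail.shape)   # (8,) (8,)
```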

27 pages, 12544 KiB  
Article
User Interface Pattern for AR in Industrial Applications
by Regina Koreng and Heidi Krömker
Information 2021, 12(6), 251; https://doi.org/10.3390/info12060251 - 16 Jun 2021
Cited by 3 | Viewed by 3125
Abstract
The background of the paper is that there are currently no specifications or guidelines for the design of a user interface for an augmented reality system in an industrial context. In this area, special requirements apply to the perception and recognition of content, which are imposed by the framework conditions of the industrial environment, the human–technology interaction, and the work task. This paper addresses the software design of augmented reality interfaces in the industrial environment. The aim is to give first design examples for software tasks by means of sample solutions. For a user-oriented implementation, the method of personas and an empirical investigation were used. Personas are stereotypical representations of end users that reflect their characteristics and requirements. For the subsequent development of the pattern catalog, different prototypes with layout and interaction variants were tested in an empirical study with 50 participants. By observing current realizations, these can be examined more closely in terms of their specific use in an industrial environment. The result is a pattern catalog with 26 solutions for layout and interaction variants. For the layout variants, no clear favorite among the testers could be ascertained; the existing solutions already offer a wide spectrum and are chosen according to personal preference. For interaction, on the other hand, it is important to enable fast input; during the study, gesture control revealed weaknesses in this regard. The catalog can support the development of an industrial augmented reality system with a user-oriented representation of the interface.
(This article belongs to the Collection Augmented Reality Technologies, Systems and Applications)

18 pages, 1139 KiB  
Article
Stock Trend Prediction Using Deep Learning Approach on Technical Indicator and Industrial Specific Information
by Kittisak Prachyachuwong and Peerapon Vateekul
Information 2021, 12(6), 250; https://doi.org/10.3390/info12060250 - 15 Jun 2021
Cited by 21 | Viewed by 5945
Abstract
Stock trend prediction has been in the spotlight from the past to the present. Fortunately, there is an enormous amount of information available nowadays. Prior attempts have tried to forecast the trend using textual information; however, they can be further improved, since they relied on fixed word embeddings and on the sentiment of the whole market. In this paper, we propose a deep learning model to predict the Thailand Futures Exchange (TFEX) with the ability to analyze both numerical and textual information. We have used Thai economic news headlines from various online sources. To obtain better news sentiment, we have divided the headlines into industry-specific indexes (also called "sectors") to reflect the movement of securities with the same fundamentals. The proposed method consists of Long Short-Term Memory Network (LSTM) and Bidirectional Encoder Representations from Transformers (BERT) architectures to predict daily stock market activity. We have evaluated model performance by considering predictive accuracy and the returns obtained from simulated buying and selling. The experimental results demonstrate that enhancing both numerical and textual information of each sector can improve prediction performance and outperform all baselines.
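A rough sketch of how such a two-branch architecture can be wired is shown below; the layer sizes, the three-class output, and the use of precomputed 768-dimensional BERT headline embeddings are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class HybridTrendModel(nn.Module):
    """Toy two-branch model: an LSTM over daily technical indicators plus a
    projection of precomputed BERT headline embeddings, fused to predict a
    down/flat/up trend. A sketch only, not the paper's architecture."""
    def __init__(self, n_indicators=8, bert_dim=768, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_indicators, hidden, batch_first=True)
        self.text_proj = nn.Linear(bert_dim, hidden)
        self.head = nn.Linear(2 * hidden, 3)

    def forward(self, indicators, headline_emb):
        _, (h, _) = self.lstm(indicators)       # h[-1]: (batch, hidden)
        fused = torch.cat([h[-1], self.text_proj(headline_emb)], dim=1)
        return self.head(fused)

model = HybridTrendModel()
logits = model(torch.randn(4, 30, 8), torch.randn(4, 768))  # 30-day windows
print(logits.shape)                                          # torch.Size([4, 3])
```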

16 pages, 4441 KiB  
Article
An Imbalanced Image Classification Method for the Cell Cycle Phase
by Xin Jin, Yuanwen Zou and Zhongbing Huang
Information 2021, 12(6), 249; https://doi.org/10.3390/info12060249 - 15 Jun 2021
Cited by 9 | Viewed by 2626
Abstract
The cell cycle is an important process in cellular life. In recent years, some image processing methods have been developed to determine the cell cycle stages of individual cells. However, in most of these methods, cells have to be segmented, and their features need to be extracted. During feature extraction, some important information may be lost, resulting in lower classification accuracy. Thus, we used a deep learning method to retain all cell features. To address the insufficient number of original images and their imbalanced distribution, we used the Wasserstein generative adversarial network with gradient penalty (WGAN-GP) for data augmentation. At the same time, a residual network (ResNet), one of the most widely used deep learning classification networks, was used for image classification. With our method, the classification accuracy of cell cycle images reached 83.88%. Compared with an accuracy of 79.40% in previous experiments, our accuracy increased by 4.48%. Another dataset was used to verify the effect of our model and, compared with previous results, our accuracy increased by 12.52%. The results showed that our new cell cycle image classification system based on WGAN-GP and ResNet is useful for the classification of imbalanced images. Moreover, our method could potentially solve the low classification accuracy in biomedical images caused by insufficient numbers of original images and their imbalanced distribution.
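The distinguishing ingredient of WGAN-GP is the gradient penalty that keeps the critic approximately 1-Lipschitz. Here is a minimal PyTorch sketch of that term, assuming 4D image batches and a critic that returns one score per sample:

```python
import torch

def gradient_penalty(critic, real, fake):
    """WGAN-GP penalty: push the critic's gradient norm toward 1 on random
    interpolates between real and generated image batches (B, C, H, W)."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(outputs=scores, inputs=interp,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True)[0]
    grads = grads.reshape(grads.size(0), -1)
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()

# Usage inside the critic's training step (lambda_gp = 10 is conventional):
# loss = critic(fake).mean() - critic(real).mean() \
#        + 10 * gradient_penalty(critic, real, fake)
```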

18 pages, 578 KiB  
Article
Automated Classification of Fake News Spreaders to Break the Misinformation Chain
by Simone Leonardi, Giuseppe Rizzo and Maurizio Morisio
Information 2021, 12(6), 248; https://doi.org/10.3390/info12060248 - 15 Jun 2021
Cited by 18 | Viewed by 3848
Abstract
In social media, users spread misinformation easily and without fact checking. In principle, they have no malicious intent, but their sharing leads to a socially dangerous diffusion mechanism. The motivations behind this behavior have been linked to a wide variety of social and personal outcomes, but these users are not easily identified. Existing solutions show that the analysis of linguistic signals in social media posts, combined with the exploration of network topologies, is effective in this field. These applications have some limitations, such as focusing solely on the fake news shared and not understanding the typology of the user spreading it. In this paper, we propose a computational approach to extract features from the social media posts of these users to recognize who is a fake news spreader for a given topic. Thanks to the CoAID dataset, we start the analysis with 300 K users engaged on an online micro-blogging platform; then, we enrich the dataset by extending it to a collection of more than 1 M share actions and their associated posts on the platform. The proposed approach processes a batch of Twitter posts authored by users of the CoAID dataset and turns them into a high-dimensional matrix of features, which is then exploited by a deep neural network architecture based on transformers to perform user classification. We prove the effectiveness of our work by comparing the precision, recall, and f1 score of our model with different configurations and with a baseline classifier. We obtained an f1 score of 0.8076, improving on the state of the art by 4%.
(This article belongs to the Special Issue News Research in Social Networks and Social Media)

11 pages, 3006 KiB  
Article
CFM-RFM: A Cascading Failure Model for Inter-Domain Routing Systems with the Recovery Feedback Mechanism
by Wendian Zhao, Yongjie Wang, Xinli Xiong and Yang Li
Information 2021, 12(6), 247; https://doi.org/10.3390/info12060247 - 14 Jun 2021
Cited by 7 | Viewed by 2544
Abstract
With the increase and diversification of network users, the scale of the inter-domain routing system is becoming larger and larger. Cascading failure analysis and modeling are of great significance for improving the security of inter-domain routing networks. To address the fact that existing propagation models of cascading failure do not conform to reality, a Cascading Failure Model for inter-domain routing systems with a Recovery Feedback Mechanism (CFM-RFM) is proposed in this paper. CFM-RFM comprehensively considers the main factors that cause cascading failure. Based on two types of update-message propagation mechanism and on traffic redistribution, it simulates the cascading failure process. We found that, under the action of the recovery feedback mechanism, the cascading failure process was accelerated, and the network did not quickly return to normal but was instead quickly and extensively paralyzed. The average attack cost can be reduced by 38.1% while inflicting the same damage on the network.
(This article belongs to the Special Issue Secure and Trustworthy Cyber–Physical Systems)
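For intuition, cascading failure via traffic redistribution is often illustrated with a generic load-redistribution model like the sketch below; this is a textbook-style toy, not the paper's CFM-RFM:

```python
def cascade(capacity, load, neighbors, seed):
    """Toy load-redistribution cascade: a failed node's load is shared
    equally among surviving neighbors; overloaded neighbors fail in turn."""
    failed, frontier = {seed}, [seed]
    while frontier:
        node = frontier.pop()
        alive = [n for n in neighbors[node] if n not in failed]
        for n in alive:
            load[n] += load[node] / len(alive)
        load[node] = 0.0
        for n in alive:
            if load[n] > capacity[n]:
                failed.add(n)
                frontier.append(n)
    return failed

neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
load = {0: 4.0, 1: 3.0, 2: 3.5, 3: 1.0}
capacity = {n: 1.2 * load[n] for n in load}   # 20% headroom per node
print(cascade(capacity, dict(load), neighbors, seed=0))   # nodes that failed
```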

13 pages, 3453 KiB  
Article
Empirical Analysis of IPv4 and IPv6 Networks through Dual-Stack Sites
by Kwun-Hung Li and Kin-Yeung Wong
Information 2021, 12(6), 246; https://doi.org/10.3390/info12060246 - 14 Jun 2021
Cited by 9 | Viewed by 5194
Abstract
IPv6 is the most recent version of the Internet Protocol (IP); it solves the problem of IPv4 address exhaustion and allows the growth of the Internet (particularly in the era of the Internet of Things). IPv6 networks have been deployed for more than a decade, and the deployment is still growing every year. This empirical study was conducted from the perspective of end users to evaluate IPv6 and IPv4 performance by sending probing traffic to 1792 dual-stack sites around the world. Connectivity, packet loss, hop count, round-trip time (RTT), and throughput were used as performance metrics. The results show that, compared with IPv4, IPv6 has better connectivity, lower packet loss, and a similar hop count. However, it has higher latency and lower throughput than IPv4. We compared our results with previous studies conducted in 2004, 2007, and 2014 to investigate the improvement of IPv6 networks. The results over the past 16 years show that the connectivity of IPv6 has increased by 1–4%, and the IPv6 RTT (194.85 ms) has been greatly reduced, but it is still longer than that of IPv4 (163.72 ms). The throughput of IPv6 is still lower than that of IPv4.
(This article belongs to the Special Issue Wireless IoT Network Protocols)
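A minimal end-user probe in the spirit of such measurements uses TCP connect time as an RTT proxy over each address family; the hostname below is a placeholder, and the paper's probing methodology is more elaborate:

```python
import socket, time

def connect_rtt(host, family, port=443, timeout=3.0):
    """TCP connect time (ms) to host over the given address family."""
    addr = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)[0][4]
    with socket.socket(family, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        t0 = time.perf_counter()
        s.connect(addr)
        return (time.perf_counter() - t0) * 1000.0

site = "www.example.com"   # placeholder dual-stack hostname
print("IPv4:", connect_rtt(site, socket.AF_INET), "ms")
print("IPv6:", connect_rtt(site, socket.AF_INET6), "ms")
```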

13 pages, 16074 KiB  
Article
Deep Learning Models for Colorectal Polyps
by Ornela Bardhi, Daniel Sierra-Sosa, Begonya Garcia-Zapirain and Luis Bujanda
Information 2021, 12(6), 245; https://doi.org/10.3390/info12060245 - 10 Jun 2021
Cited by 7 | Viewed by 4465
Abstract
Colorectal cancer is one of the main causes of incident cancer cases and cancer deaths worldwide. Undetected colon polyps, whether benign or malignant, lead to late diagnosis of colorectal cancer. Computer-aided devices have helped to decrease the polyp miss rate. The application of deep learning algorithms and techniques has escalated during this last decade. Many scientific studies have been published on detecting, localizing, and classifying colon polyps. We present here a brief review of the latest published studies. We compare the accuracy of these studies with our results, obtained from training and testing three independent datasets using a convolutional neural network and autoencoder model. A train, validate, and test split was performed for each dataset, 75%, 15%, and 15%, respectively. An accuracy of 0.937 was achieved for CVC-ColonDB, 0.951 for CVC-ClinicDB, and 0.967 for ETIS-LaribPolypDB. Our results suggest slight improvements compared to the algorithms used to date.
(This article belongs to the Special Issue Advances in AI for Health and Medical Applications)

20 pages, 5292 KiB  
Article
Developing a Virtual Museum: Experience from the Design and Creation Process
by Felipe Besoain, Liza Jego and Ismael Gallardo
Information 2021, 12(6), 244; https://doi.org/10.3390/info12060244 - 10 Jun 2021
Cited by 17 | Viewed by 5881
Abstract
Virtual reality technology has grown significantly in recent years. The arrival of head-mounted displays (HMDs) on the market for end users has positioned these technologies as a new channel for promoting new simulated or contextualized experiences. We used the design and creation strategy to develop a virtual reality experience for the Oculus Go and Quest HMDs. We digitized 30 pieces from nine local museums to provide an experience guided by a character that represents the main artisan work of the local region. A usability test was performed, showing that participants felt a high degree of immersion and realism. They were able to complete the assigned tasks, and the results suggest that the software meets its main objective. Furthermore, the creation of this virtual reality (VR) experience has shown how important it is to make users a part of the creation process, as well as to develop a process that makes the software useful to them and other users. Some recommendations are made based on the experience of the development, and comments are given on each step of the design and creation strategy.
(This article belongs to the Special Issue Virtual Reality Technologies and Applications for Cultural Heritage)

16 pages, 1394 KiB  
Article
Establishment of a Key Hidden Danger Factor System for Electric Power Personal Casualty Accidents Based on Text Mining
by Dan Lu, Changqing Xu, Chuanmin Mi, Yijing Wang, Xiangmin Xu and Chufan Zhao
Information 2021, 12(6), 243; https://doi.org/10.3390/info12060243 - 10 Jun 2021
Cited by 4 | Viewed by 2680
Abstract
Based on actual safety management difficulties and needs, this paper aims to screen and extract the key accident potential factors behind personal injuries and deaths within the electric power industry, to provide a reference for electric power companies' accident prevention efforts. First, this paper sorts out and analyzes all of the causes and influencing elements that may lead to the occurrence of electric power personal injuries and deaths, from which rough accident potential factors are initially identified, combined with the definition of accident potentials. Second, this paper mines and analyzes relevant accident report texts using text-mining technologies such as term count, word cloud, and term frequency–inverse document frequency (TF-IDF); thus, a system of key accident potential factors for personal injuries and deaths within the electric power industry, including three key factors (human, material, and management), is finally constructed. Workers' habitual violation behavior, in particular, carries a larger risk than other key accident potential components, implying that additional steps should be taken to eradicate this type of critical accident potential in time.
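For readers unfamiliar with TF-IDF, the following scikit-learn sketch ranks the most informative terms in a toy corpus; the snippets are invented English stand-ins for the paper's Chinese accident report texts:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

reports = [  # invented stand-ins for accident report snippets
    "worker violated lockout procedure near energized equipment",
    "scaffold collapsed after missed guardrail inspection",
    "worker ignored grounding procedure on energized line",
]
vec = TfidfVectorizer()
tfidf = vec.fit_transform(reports)

# Rank the terms of the first report by TF-IDF weight.
terms = vec.get_feature_names_out()
weights = tfidf[0].toarray().ravel()
print(sorted(zip(terms, weights), key=lambda t: -t[1])[:5])
```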

16 pages, 693 KiB  
Article
MNCF: Prediction Method for Reliable Blockchain Services under a BaaS Environment
by Jianlong Xu, Zicong Zhuang, Zhiyu Xia and Yuhui Li
Information 2021, 12(6), 242; https://doi.org/10.3390/info12060242 - 10 Jun 2021
Cited by 2 | Viewed by 2404
Abstract
Blockchain is an innovative distributed ledger technology that is widely used to build next-generation applications without the support of a trusted third party. With the ceaseless evolution of the service-oriented computing (SOC) paradigm, Blockchain-as-a-Service (BaaS) has emerged, which facilitates the development of blockchain-based applications. To develop a high-quality blockchain-based system, users must select highly reliable blockchain services (peers) that offer excellent quality of service (QoS). Since the vast number of blockchain services leads to sparse QoS data, selecting the optimal personalized services is challenging. Hence, we improve neural collaborative filtering and propose a QoS-based blockchain service reliability prediction algorithm under BaaS, named modified neural collaborative filtering (MNCF). In this model, we combine a neural network with matrix factorization to perform collaborative filtering on the latent feature vectors of users. Furthermore, multi-task learning for sharing parameters is introduced to improve the performance of the model. Experiments based on a large-scale real-world dataset validate its superior performance compared to baselines.
(This article belongs to the Special Issue Recent Advances in IoT and Cyber/Physical Security)
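A bare-bones neural collaborative filtering model, of the kind MNCF extends, can be sketched as follows (embedding sizes and MLP shape are illustrative assumptions):

```python
import torch
import torch.nn as nn

class NCF(nn.Module):
    """Bare-bones neural collaborative filtering: user and service embeddings
    are concatenated and fed to an MLP that predicts a QoS score."""
    def __init__(self, n_users, n_services, dim=16):
        super().__init__()
        self.user = nn.Embedding(n_users, dim)
        self.service = nn.Embedding(n_services, dim)
        self.mlp = nn.Sequential(nn.Linear(2 * dim, 32), nn.ReLU(),
                                 nn.Linear(32, 1))

    def forward(self, u, s):
        return self.mlp(torch.cat([self.user(u), self.service(s)], dim=-1))

model = NCF(n_users=100, n_services=50)
print(model(torch.tensor([3]), torch.tensor([7])).item())  # predicted QoS
```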

17 pages, 1000 KiB  
Article
Impulse Buying Behaviors in Live Streaming Commerce Based on the Stimulus-Organism-Response Framework
by Chao-Hsing Lee and Chien-Wen Chen
Information 2021, 12(6), 241; https://doi.org/10.3390/info12060241 - 08 Jun 2021
Cited by 76 | Viewed by 30510
Abstract
Live streaming commerce, which evolved from social commerce, has continued to flourish rapidly over the past few years in China. It is a new business model that allows vendors to directly face and interact with consumers. This study focuses on the impulsive buying behavior of consumers in live streaming commerce. We propose a research model based on the stimulus-organism-response (S-O-R) framework to explore the reactions and behavior of consumers after certain stimulus factors. A total of 433 valid questionnaires from respondents with shopping experience on live streaming platforms were collected. This research adopted PLS-SEM statistical analysis for the empirical evaluation. After the empirical investigation, we found that perceived enjoyment positively affects the urge to buy impulsively, and that perceived usefulness positively affects perceived enjoyment; however, perceived usefulness does not positively affect the urge to buy impulsively. Attractiveness and expertise positively affect perceived enjoyment, while product usefulness and purchase convenience positively affect perceived usefulness. We found that consumers in live streaming commerce are more prone to impulsive buying through the presentation and urging of the live streamer over a short period. In this paper, we build a model for impulsive buying in live streaming commerce and verify it in the Chinese context. The findings provide concrete suggestions to vendors.

29 pages, 7937 KiB  
Article
How to Create a Software Ecosystem? A Partnership Meta-Model and Strategic Patterns
by Ítalo Belo and Carina Alves
Information 2021, 12(6), 240; https://doi.org/10.3390/info12060240 - 03 Jun 2021
Cited by 6 | Viewed by 4394
Abstract
Large keystone organizations use partnership models to manage their software ecosystem partners. Although several partnership models have been developed by platform owners, smaller companies willing to create a new ecosystem may experience difficulties in defining the appropriate features of partnership models when switching from an independent software product to an ecosystem. This study proposes a partnership meta-model and four strategic patterns to operationalize it. We adopted the Design Science Research (DSR) method. The partnership meta-model was built in the first cycle of DSR, using a Systematic Mapping Study, and validated through case studies of the SAP, Eclipse, and Microsoft Azure ecosystems. In the second cycle of DSR, the strategic patterns were defined through a Multivocal Literature Review and validated through interviews with professionals. The meta-model presents the key characteristics for defining partnership models for emerging software ecosystems. The strategic patterns aim to operationalize the meta-model and, consequently, assist the keystone in defining the features that the partnership model will have and selecting suitable strategies. The meta-model and the strategic patterns help managers create and evolve software ecosystems from a software product, considering the impact of that transition on the partnership model.
(This article belongs to the Section Information Systems)

21 pages, 36591 KiB  
Article
Quantitative and Qualitative Comparison of 2D and 3D Projection Techniques for High-Dimensional Data
by Zonglin Tian, Xiaorui Zhai, Gijs van Steenpaal, Lingyun Yu, Evanthia Dimara, Mateus Espadoto and Alexandru Telea
Information 2021, 12(6), 239; https://doi.org/10.3390/info12060239 - 03 Jun 2021
Cited by 4 | Viewed by 4124
Abstract
Projections are well-known techniques that help the visual exploration of high-dimensional data by creating depictions thereof in a low-dimensional space. While projections that target the 2D space have been studied in detail both quantitatively and qualitatively, 3D projections are far less well understood, with authors arguing both for and against the added value of a third visual dimension. We fill this gap by first presenting a quantitative study that compares 2D and 3D projections across a rich selection of datasets, projection techniques, and quality metrics. To refine these insights, we conduct a qualitative study that compares users' preferences in exploring high-dimensional data using 2D vs. 3D projections, both without and with visual explanations. Our quantitative and qualitative findings indicate that, in general, 3D projections bring only limited added value atop that provided by their 2D counterparts. However, certain 3D projection techniques can show more structure than their 2D counterparts and can stimulate users toward further exploration. All our datasets, source code, and measurements are made public for ease of replication and extension.
(This article belongs to the Special Issue Trends and Opportunities in Visualization and Visual Analytics)
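One widely used quality metric for such comparisons is neighborhood trustworthiness; the sketch below contrasts a 2D against a 3D embedding, using t-SNE and the digits dataset purely as stand-ins for the paper's techniques and data:

```python
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE, trustworthiness

X = load_digits().data
for dim in (2, 3):
    emb = TSNE(n_components=dim, random_state=0).fit_transform(X)
    print(f"{dim}D trustworthiness:",
          round(trustworthiness(X, emb, n_neighbors=7), 3))
```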

17 pages, 8782 KiB  
Article
Autoencoder Based Analysis of RF Parameters in the Fermilab Low Energy Linac
by Jonathan P. Edelen and Christopher C. Hall
Information 2021, 12(6), 238; https://doi.org/10.3390/info12060238 - 31 May 2021
Cited by 3 | Viewed by 2819
Abstract
Machine learning (ML) has the potential for significant impact on the modeling, operation, and control of particle accelerators due to its ability to model nonlinear behavior, interpolate on complicated surfaces, and adapt to system changes over time. Anomaly detection in particular has been highlighted as an area where ML can significantly impact the operation of accelerators. These algorithms work by identifying subtle behaviors of key variables prior to negative events. Efforts to apply ML to anomaly detection have largely focused on subsystems such as RF cavities, superconducting magnets, and losses in rings. However, dedicated efforts to understand how to apply ML to anomaly detection in linear accelerators have been limited. In this paper, the use of autoencoders is explored to identify anomalous behavior in measured data from the Fermilab low-energy linear accelerator.
(This article belongs to the Special Issue Machine Learning and Accelerator Technology)
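The underlying recipe is to train an autoencoder on nominal data and flag samples whose reconstruction error is unusually large. A toy PyTorch sketch follows; the 32-dimensional input, random stand-in data, and 3-sigma threshold are assumptions, not the paper's setup:

```python
import torch
import torch.nn as nn

# Toy dense autoencoder for anomaly detection on vectors of RF readings.
ae = nn.Sequential(nn.Linear(32, 8), nn.ReLU(), nn.Linear(8, 32))
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
nominal = torch.randn(512, 32)          # stand-in for nominal pulse data

for _ in range(200):                    # train to reconstruct nominal data
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(nominal), nominal)
    loss.backward()
    opt.step()

with torch.no_grad():
    err = ((ae(nominal) - nominal) ** 2).mean(dim=1)
    threshold = err.mean() + 3 * err.std()        # simple 3-sigma rule
    pulse = torch.randn(1, 32) * 4                # an off-nominal pulse
    is_anomaly = ((ae(pulse) - pulse) ** 2).mean() > threshold
print(bool(is_anomaly))
```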

17 pages, 876 KiB  
Article
IT Process Alignment in Business Strategy: Examining the Role of Transactional Leadership and Organization Culture
by Yongming Wang, Muhammad Toseef and Yingmei Gong
Information 2021, 12(6), 237; https://doi.org/10.3390/info12060237 - 31 May 2021
Cited by 4 | Viewed by 4881
Abstract
Information technology (IT) is a competitive path that offers entrepreneurs the opportunity to accumulate business knowledge in capturing consumer behavior. This study employed a conceptual framework to investigate the information processing facet of IT–business alignment under the impact mechanism of transactional leadership in the manufacturing sector of Yunnan Province, China. Specifically, organization culture is taken as a moderating factor extracted from situational theory and has been highlighted as important in previous organizational research. This study aimed to investigate the impact of transactional leadership on IT–business process alignment and to study the moderating effect of organizational culture on the relationship between the two. The empirical findings reveal that the contingent reward and management by exception behaviors of entrepreneurs are significant drivers of IT–business process alignment. Furthermore, market culture had a moderating effect on the relationship between entrepreneurs' transactional behaviors and IT–business process alignment. Similarly, hierarchy culture exerts a moderating effect on the path between contingent rewarding behavior and IT–business process alignment. However, it exerts an insignificant moderating effect on the path between management by exception behavior and IT–business process alignment. The study findings mainly reveal the association of transactional leadership with IT–business process alignment, along with the moderating role of organizational culture. This study contributes to the literature on business knowledge by showcasing empirical evidence of how information processing aids entrepreneurial behavior in capturing market opportunities and consumer behavior.
(This article belongs to the Special Issue Enterprise Architecture in the Digital Era)

30 pages, 10084 KiB  
Article
Integrating Land-Cover Products Based on Ontologies and Local Accuracy
by Ling Zhu, Guangshuai Jin and Dejun Gao
Information 2021, 12(6), 236; https://doi.org/10.3390/info12060236 - 31 May 2021
Cited by 6 | Viewed by 2216
Abstract
Freely available satellite imagery improves the research and production of land-cover products at the global scale or over large areas. The integration of land-cover products is a process of combining the advantages or characteristics of several products to generate new products and meet the demand for special needs. This study presents an ontology-based semantic mapping approach for integrating land-cover products using a hybrid ontology, with EAGLE (EIONET Action Group on Land monitoring in Europe) matrix elements as the shared vocabulary, linking and comparing concepts from multiple local ontologies. Ontology mapping based on terms, attributes, and instances is combined to obtain the semantic similarity between heterogeneous land-cover products and realise the integration at the schema level. Moreover, through the collection and interpretation of ground verification points, the local accuracy of the source products is evaluated using the index Kriging method. Two integration models are developed that combine semantic similarity and local accuracy. Taking NLCD (National Land Cover Database) and FROM-GLC-Seg (Finer Resolution Observation and Monitoring-Global Land Cover-Segmentation) as source products and the second-level class refinement of the GlobeLand30 land-cover product as an example, the forest class is subdivided into broad-leaf, coniferous, and mixed forest. The results show that the highest accuracies of the second-level classes are 82.6%, 72.0%, and 60.0% for broad-leaf, coniferous, and mixed forest, respectively.
(This article belongs to the Special Issue Big Data Integration and Intelligent Information Integration)

22 pages, 6155 KiB  
Article
Research on Estimation Method of Geometric Features of Structured Negative Obstacle Based on Single-Frame 3D Laser Point Cloud
by Xingdong Li, Zhiming Gao, Xiandong Chen, Shufa Sun and Jiuqing Liu
Information 2021, 12(6), 235; https://doi.org/10.3390/info12060235 - 30 May 2021
Cited by 4 | Viewed by 2404
Abstract
A single VLP-16 LiDAR estimation method based on a single-frame 3D laser point cloud is proposed to address the problem of estimating negative obstacles' geometric features in structured environments. First, a distance measurement method is developed to determine the estimation range of the negative obstacle, which can be used to verify the accuracy of distance estimation. Second, the 3D point cloud of a negative obstacle is transformed into a 2D elevation raster image, making the detection and estimation of negative obstacles more intuitive and accurate. Third, we compare the effects of the StatisticalOutlierRemoval, RadiusOutlierRemoval, and ConditionalRemoval filters on 3D point clouds, and the effects of Gaussian, median, and averaging filters on 2D image denoising, and design a flowchart for point cloud and image noise reduction. Finally, a geometric feature estimation method is proposed based on the elevation raster image. The negative obstacle image in the raster is used as an auxiliary line, and the number of pixels is derived from the OpenCV-based Progressive Probabilistic Hough Transform to estimate the geometric features of the negative obstacle based on the raster size. The experimental results show that the algorithm has high accuracy in estimating the geometric characteristics of negative obstacles on structured roads and has practical application value for LiDAR environment perception research.
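The Progressive Probabilistic Hough Transform mentioned above is available in OpenCV as HoughLinesP. Here is a minimal sketch of extracting segment lengths from a synthetic elevation raster; the 0.05 m/pixel cell size is an assumed value:

```python
import cv2
import numpy as np

# Synthetic stand-in for a 2D elevation raster with a trench-like boundary.
img = np.zeros((200, 200), dtype=np.uint8)
cv2.line(img, (30, 100), (170, 100), 255, 3)
edges = cv2.Canny(img, 50, 150)

# Progressive Probabilistic Hough Transform: returns pixel line segments.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                        minLineLength=20, maxLineGap=5)
cell_size = 0.05                                  # assumed metres per pixel
for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
    print("segment length:", np.hypot(x2 - x1, y2 - y1) * cell_size, "m")
```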

16 pages, 1602 KiB  
Article
Product Customer Satisfaction Measurement Based on Multiple Online Consumer Review Features
by Yiming Liu, Yinze Wan, Xiaolian Shen, Zhenyu Ye and Juan Wen
Information 2021, 12(6), 234; https://doi.org/10.3390/info12060234 - 29 May 2021
Cited by 6 | Viewed by 5748
Abstract
With the development of the e-commerce industry, various brands of products with different qualities and functions continuously emerge, and the number of online shopping users is increasing every year. After a purchase, users often leave product comments on the platform, which can be used to help consumers choose commodities and help e-commerce companies better understand the popularity of their goods. At present, e-commerce platforms lack an effective way to measure customer satisfaction based on the various features of customer comments. In this paper, our goal is to build a product customer satisfaction measurement by analyzing the relationship between the important attributes of reviews and star ratings. We first use an improved information gain algorithm to analyze historical reviews and star rating data to find the most informative words that purchasers care about. Then, we make hypotheses about the relevant factors in the usefulness of reviews and verify them using linear regression. We finally establish a customer satisfaction measurement based on different review features. We conduct our experiments on three products of different brands chosen from the Amazon online store. Based on our experiments, we discover that features such as the length and extremeness of comments affect review usefulness, and that the consumer satisfaction measurement constructed using the exponential moving average method can effectively reflect the trend of user satisfaction over time. Our work can help companies acquire valuable suggestions to improve product features and increase sales, and help customers make wise purchases.
(This article belongs to the Special Issue Personalized Visual Recommendation for E-Commerce)
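The exponential moving average used for the satisfaction trend is simple to state: s_t = α·x_t + (1 − α)·s_{t−1}. A minimal sketch with an assumed smoothing factor and invented monthly ratings:

```python
def ema(values, alpha=0.3):
    """Exponential moving average: s_t = alpha * x_t + (1 - alpha) * s_{t-1}."""
    s = values[0]
    out = [s]
    for x in values[1:]:
        s = alpha * x + (1 - alpha) * s
        out.append(s)
    return out

monthly_ratings = [4.1, 3.8, 4.4, 3.2, 3.5, 4.0]   # invented monthly means
print([round(v, 2) for v in ema(monthly_ratings)])
```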

15 pages, 36464 KiB  
Article
Joint Subtitle Extraction and Frame Inpainting for Videos with Burned-In Subtitles
by Haoran Xu, Yanbai He, Xinya Li, Xiaoying Hu, Chuanyan Hao and Bo Jiang
Information 2021, 12(6), 233; https://doi.org/10.3390/info12060233 - 29 May 2021
Viewed by 4122
Abstract
Subtitles are crucial for video content understanding. However, a large number of videos have only burned-in, hardcoded subtitles that prevent video re-editing, translation, etc. In this paper, we construct a deep-learning-based system for the inverse conversion of a burned-in subtitle video into a subtitle file and an inpainted video, by coupling three deep neural networks (CTPN, CRNN, and EdgeConnect). We evaluated the performance of the proposed method and found that it achieved high-precision separation of the subtitles and video frames and significantly improved the video inpainting results compared to existing methods. This research fills a gap in the application of deep learning to burned-in subtitle video reconstruction and is expected to be widely applied in the reconstruction and re-editing of videos with subtitles, advertisements, logos, and other occlusions.
(This article belongs to the Special Issue Recent Advances in Video Compression and Coding)

23 pages, 3807 KiB  
Review
A Comprehensive Survey of Knowledge Graph-Based Recommender Systems: Technologies, Development, and Contributions
by Janneth Chicaiza and Priscila Valdiviezo-Diaz
Information 2021, 12(6), 232; https://doi.org/10.3390/info12060232 - 28 May 2021
Cited by 72 | Viewed by 11322
Abstract
In recent years, the use of recommender systems has become popular on the web. To improve recommendation performance, usage, and scalability, the research has evolved by producing several generations of recommender systems. There is much literature about them, although most proposals focus on the theories and applications of traditional methods. Recently, knowledge graph-based recommendations have attracted attention in academia and industry because they can alleviate information sparsity and performance problems. We found only two studies that analyze recommender systems' role over graphs, but they focus on specific recommendation methods. This survey attempts to cover a broader analysis from a set of selected papers. In summary, the contributions of this paper are as follows: (1) we explore traditional and more recent developments of filtering methods for recommender systems, (2) we identify and analyze proposals related to knowledge graph-based recommender systems, (3) we present the most relevant contributions using an application domain, and (4) we outline future directions of research in the domain of recommender systems. As the main survey result, we found that the use of knowledge graphs for recommendations is an efficient way to leverage and connect a user's and an item's knowledge, thus providing more precise results for users.
(This article belongs to the Collection Knowledge Graphs for Search and Recommendation)

18 pages, 697 KiB  
Article
Reinforcement Learning Page Prediction for Hierarchically Ordered Municipal Websites
by Petri Puustinen, Kostas Stefanidis, Jaana Kekäläinen and Marko Junkkari
Information 2021, 12(6), 231; https://doi.org/10.3390/info12060231 - 28 May 2021
Viewed by 2476
Abstract
Public websites offer information on a variety of topics and services and are accessed by users of varying skills to browse different types of electronic document repositories. However, the complex website structure and diversity of web browsing behavior create a challenging task for click prediction. This paper presents the results of a novel reinforcement learning approach to model user browsing patterns in a hierarchically ordered municipal website. We study how accurate the predictor for browsing history is when the target pages are not the immediate next pages pointed to by hyperlinks but instead appear a number of levels down the hierarchy. We compare the performance of traditional types of baseline classifiers against our reinforcement learning-based training algorithm.
(This article belongs to the Special Issue Novel Methods and Applications in Natural Language Processing)
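A toy flavor of reinforcement-learning-based click prediction is tabular Q-learning over a page graph, as sketched below; the site structure, rewards, and hyperparameters are invented, and the paper's state and reward design will differ:

```python
import random
from collections import defaultdict

# Invented site hierarchy: page -> pages reachable by one click.
links = {"home": ["services", "news"],
         "services": ["permits", "housing"],
         "news": ["home"],
         "permits": [], "housing": []}
target = "permits"                      # a page several levels down

Q = defaultdict(float)                  # Q[(page, next_page)]
alpha, gamma, eps = 0.5, 0.9, 0.2

for _ in range(2000):                   # simulated browsing sessions
    page, steps = "home", 0
    while links[page] and steps < 20:
        if random.random() < eps:       # epsilon-greedy exploration
            nxt = random.choice(links[page])
        else:
            nxt = max(links[page], key=lambda p: Q[(page, p)])
        reward = 1.0 if nxt == target else 0.0
        future = max((Q[(nxt, p)] for p in links[nxt]), default=0.0)
        Q[(page, nxt)] += alpha * (reward + gamma * future - Q[(page, nxt)])
        page, steps = nxt, steps + 1

# Predicted next click from the landing page, looking toward the target.
print(max(links["home"], key=lambda p: Q[("home", p)]))
```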

16 pages, 3182 KiB  
Article
Deep Hybrid Network for Land Cover Semantic Segmentation in High-Spatial Resolution Satellite Images
by Sultan Daud Khan, Louai Alarabi and Saleh Basalamah
Information 2021, 12(6), 230; https://doi.org/10.3390/info12060230 - 28 May 2021
Cited by 24 | Viewed by 3718
Abstract
Land cover semantic segmentation in high-spatial-resolution satellite images plays a vital role in the efficient management of land resources, smart agriculture, yield estimation, and urban planning. With the recent advancement of remote sensing technologies, such as satellites, drones, UAVs, and airborne vehicles, a large number of high-resolution satellite images are readily available. However, these high-resolution satellite images are complex due to the increased spatial resolution and the data disruption caused by different factors involved in the acquisition process. Due to these challenges, an efficient land-cover semantic segmentation model is difficult to design and develop. In this paper, we develop a hybrid deep learning model that combines the benefits of two deep models, i.e., DenseNet and U-Net, to obtain a pixel-wise classification of land cover. The contraction path of U-Net is replaced with DenseNet to extract features at multiple scales, while the long-range connections of U-Net, which concatenate the encoder and decoder paths, are used to preserve low-level features. We evaluate the proposed hybrid network on a challenging, publicly available benchmark dataset. The experimental results demonstrate that the proposed hybrid network exhibits state-of-the-art performance and beats other existing models by a considerable margin.
(This article belongs to the Special Issue Big Spatial Data Management)

18 pages, 329 KiB  
Article
MMORPG Evolution Analysis from Explorer and Achiever Perspectives: A Case Study Using the Final Fantasy Series
by Haolan Wang, Zeliang Zhang, Mohd Nor Akmal Khalid, Hiroyuki Iida and Keqiu Li
Information 2021, 12(6), 229; https://doi.org/10.3390/info12060229 - 27 May 2021
Cited by 3 | Viewed by 5698
Abstract
Due to the advent of the Internet, massively multiplayer online role-playing games (MMORPGs) have been enjoyed worldwide by many players simultaneously, and game publishers' revenues have reached billions of dollars from subscriptions alone. Frequent updates (e.g., versioning) and new content (e.g., quest systems) are the typical strategies adopted by developers to keep MMORPG experiences fresh and attractive. What makes such strategies attractive and retains the interest of players in MMORPGs? This study focuses on one aspect of a popular MMORPG: the player's experience of the quest systems of Final Fantasy XIV (FF14). The different quest systems were analyzed considering Bartle's player classification, specifically for the explorers and achievers. From an information science perspective, such an analysis can be achieved via game refinement (GR) theory, which formulates the information of the game's progression into a measurable model of game sophistication. On top of that, we used the concept of motion in mind, derived from concepts in physics, which maps game progression information to enable the quantification and approximation of players' mental movements and affective experiences in the game. Based on the analysis of the collected data using the proposed measures of GR and motion in mind, the impact of regular updates on players in long-term games is discussed. Insights from the study provide guidance and suggestions for potential improvements in long-term game design.
(This article belongs to the Special Issue Gamification and Game Studies)

12 pages, 1605 KiB  
Article
Improved Relief Weight Feature Selection Algorithm Based on Relief and Mutual Information
by Hongbin Wang, Pengming Wang, Shengchun Deng and Haoran Li
Information 2021, 12(6), 228; https://doi.org/10.3390/info12060228 - 27 May 2021
Cited by 4 | Viewed by 2485
Abstract
As a classic feature selection algorithm, the Relief algorithm has the advantages of simple computation and high efficiency, but the algorithm itself is limited to dealing only with binary classification problems, and the feature subsets composed of the top K features selected by the Relief algorithm are often redundant, so the algorithm cannot select the ideal feature subset. When calculating the correlation and redundancy between features by mutual information, the computation is slow because of the high computational complexity and the need to calculate the probability density functions of the corresponding features. Aiming to solve the above problems, we first improve the weighting of the Relief algorithm so that it can be used to evaluate a set of candidate feature sets. Then, we use an improved joint mutual information evaluation function to replace the basic mutual information computation, solving the problems of computation speed and of correlation and redundancy between features. Finally, a compound correlation feature selection algorithm based on Relief and joint mutual information is proposed using the evaluation function and a heuristic sequential forward search strategy. This algorithm can effectively select feature subsets with low redundancy and strong classification characteristics, and computes faster.
(This article belongs to the Section Artificial Intelligence)
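For reference, the classic Relief weight update contrasts each sampled instance with its nearest hit (same class) and nearest miss (other class). A compact NumPy sketch for the binary case:

```python
import numpy as np

def relief(X, y, n_iter=100, rng=np.random.default_rng(0)):
    """Classic binary-class Relief: reward features that differ at the nearest
    miss (other class) and penalize those that differ at the nearest hit."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        dist = np.abs(X - X[i]).sum(axis=1)   # L1 distance to every instance
        dist[i] = np.inf                      # exclude the instance itself
        hit = np.where(y == y[i], dist, np.inf).argmin()
        miss = np.where(y != y[i], dist, np.inf).argmin()
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / n_iter
    return w

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0, 0, 1, 1])                    # class follows feature 0
print(relief(X, y))                           # feature 0 gets the larger weight
```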

14 pages, 3244 KiB  
Article
Constructing Crop Portraits Based on Graph Databases Is Essential to Agricultural Data Mining
by Yue-Xin Shi, Bo-Kai Zhang, Yong-Xiang Wang, Han-Qian Luo and Xiang Li
Information 2021, 12(6), 227; https://doi.org/10.3390/info12060227 - 27 May 2021
Cited by 5 | Viewed by 2391
Abstract
Neo4j is a graph database that can use not only data but also data relationships. Crop portraits, a kind of property graph, model crop entities in the real world based on data to realize the networked management of crop knowledge. Existing crop knowledge bases have shortcomings such as covering a single crop variety, incomplete descriptions, and a lack of agricultural knowledge. Constructing crop portraits can provide a comprehensive description of crops and make up for these shortcomings. This research used agricultural question-and-answer data and popular science data obtained by text crawling as the original data, selected labels to establish a crop portrait that includes three categories (crops, pesticides, and diseases and pests), and used the graph database (Neo4j) to store and display these portrait data. Information mining found that the crop portrait revealed the occurrence trends of diseases and pests, exhibited non-intrinsic connections between different diseases and pests, and provided a variety of pesticides to choose from for the control of diseases and pests. The results showed that constructing crop portraits is beneficial to agricultural analysis and has practical application value and theoretical research prospects in the field of big data analytics.
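A minimal illustration of storing and querying such a portrait with the Neo4j Python driver follows; the node labels, relationship names, credentials, and the wheat example are invented for illustration and assume a locally running Neo4j instance:

```python
from neo4j import GraphDatabase

# Assumed local instance and invented credentials/labels for illustration.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # Store one tiny fragment of a crop portrait.
    session.run(
        "MERGE (c:Crop {name: $crop}) "
        "MERGE (d:Disease {name: $disease}) "
        "MERGE (p:Pesticide {name: $pesticide}) "
        "MERGE (c)-[:SUFFERS_FROM]->(d) "
        "MERGE (d)-[:CONTROLLED_BY]->(p)",
        crop="wheat", disease="powdery mildew", pesticide="triadimefon")
    # Query the pesticides available against the crop's diseases.
    for rec in session.run(
            "MATCH (c:Crop {name: $crop})-[:SUFFERS_FROM]->(d)"
            "-[:CONTROLLED_BY]->(p) RETURN d.name AS disease, p.name AS pesticide",
            crop="wheat"):
        print(rec["disease"], "->", rec["pesticide"])
driver.close()
```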
