Information, Volume 12, Issue 7 (July 2021) – 29 articles

Cover Story: Recurrent Neural Networks are powerful machine learning frameworks that allow data to be saved and referenced in a temporal sequence. This opens many new research possibilities in fields such as biometric authentication and anomaly detection. Examples of biometric authentication include mouse movement, keystroke, handwritten password and even palm print authentication. Anomaly detection ranges from detecting spam emails to identifying malicious network, aviation or maritime vessel traffic. With continued research in these areas, computer scientists are ensuring that user data and critical systems are secured with top-level biometric authentication and that people stay safe through novel anomaly detection techniques.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms, but PDF is the official format. To view a paper in PDF, click the "PDF Full-text" link and open it with the free Adobe Reader.
18 pages, 6500 KiB  
Article
Deep Hash with Improved Dual Attention for Image Retrieval
by Wenjing Yang, Liejun Wang, Shuli Cheng, Yongming Li and Anyu Du
Information 2021, 12(7), 285; https://doi.org/10.3390/info12070285 - 20 Jul 2021
Cited by 6 | Viewed by 2458
Abstract
Recently, deep learning to hash has been applied extensively to image retrieval due to its low storage cost and fast query speed. However, existing hashing methods that use a convolutional neural network (CNN) to extract image semantic features suffer from insufficiency and imbalance: the extracted features lack contextual information and relevance to one another. Furthermore, relaxing the hash code during training leads to an inevitable quantization error. To solve these problems, this paper proposes deep hashing with improved dual attention for image retrieval (DHIDA), whose main contributions are as follows: (1) an improved dual attention mechanism (IDA), built on a pre-trained ResNet18 module and consisting of a position attention module and a channel attention module, extracts the feature information of the image; (2) when computing the spatial and channel attention matrices, the average and maximum values of the columns of the feature-map matrix are integrated to strengthen the feature representation and fully leverage the features at each position; and (3) to reduce quantization error, a new piecewise function is designed to directly guide the discrete binary code. Experiments on CIFAR-10, NUS-WIDE and ImageNet-100 show that the DHIDA algorithm achieves better performance.
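The storage and query-speed advantage of binary hash codes mentioned in this abstract can be illustrated with a generic sign-based baseline (not DHIDA's learned piecewise function; the feature vectors here are invented):

```python
def binarize(features):
    # Map a real-valued feature vector to a binary hash code by sign.
    # DHIDA instead learns a piecewise function; sign() is the classic
    # relaxation-free baseline.
    return [1 if f >= 0 else 0 for f in features]

def hamming(code_a, code_b):
    # Hamming distance: count of differing bits. Comparing compact bit
    # codes instead of float vectors is what makes retrieval fast.
    return sum(a != b for a, b in zip(code_a, code_b))

query = binarize([0.9, -0.3, 0.4, -0.8])    # -> [1, 0, 1, 0]
db_item = binarize([0.7, -0.1, -0.2, -0.6]) # -> [1, 0, 0, 0]
distance = hamming(query, db_item)
```

Ranking a database by this distance is the standard hash-retrieval query; the quantization error the paper targets is exactly the information lost when sign() collapses nearby features to the same bit.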

14 pages, 1213 KiB  
Article
Research on Behavior Incentives of Prefabricated Building Component Manufacturers
by Pinbo Yao and Hongda Liu
Information 2021, 12(7), 284; https://doi.org/10.3390/info12070284 - 17 Jul 2021
Cited by 16 | Viewed by 2185
Abstract
Based on the positive externalities of prefabricated buildings, this paper constructs an evolutionary game model between the government and material component vendors and analyzes how the behavior of both parties changes across the stages of prefabricated building adoption. Data modeling and equation-based prediction show that, in the initial stage, the growing incremental cost of construction dampens the government's enthusiasm. The government's incentives nevertheless effectively shape the behavior of component vendors, and fiscal taxation and punishment policies encourage vendors to supply prefabricated components. In the development stage, the influence of the government's fiscal policy weakens, and vendors' behavior is driven mainly by the incremental costs and benefits of components; as the gap between the builder's incremental cost and sales revenue narrows, the predicted behavior of both parties stabilizes. In the mature stage, prefabricated buildings will rely mainly on market forces, and the government can gradually withdraw from the market. With costs trending lower, component vendors can be expected to supply components while the government shifts toward restrictive policies.
(This article belongs to the Special Issue Data Modeling and Predictive Analytics)

14 pages, 1702 KiB  
Concept Paper
A Theory of Information Trilogy: Digital Ecosystem Information Exchange Architecture
by Asif Qumer Gill
Information 2021, 12(7), 283; https://doi.org/10.3390/info12070283 - 16 Jul 2021
Cited by 3 | Viewed by 2887
Abstract
Information sharing is a critical component of a distributed and multi-actor digital ecosystem (DE). DE actors, both individuals and organisations, require a seamless, effective, efficient, and secure architecture for exchanging information. Traditional point-to-point and ad hoc integrations hinder their ability to do so. The challenge is how to enable information sharing in a complex DE. This paper addresses this important research challenge and proposes a theory of information trilogy and a conceptual DE information exchange architecture, inspired by the study of nature and the flow of matter, energy, and their states in natural ecosystems. This work is part of a larger DE information framework; the scope of this paper is limited to the emerging concept of DE information exchange, whose application is demonstrated with a geospatial information sharing case study. The results can be used by researchers and practitioners to define DE information exchange as appropriate to their context. This work also complements Shannon's mathematical theory of communication.
(This article belongs to the Special Issue Architecting Digital Information Ecosystems)

11 pages, 436 KiB  
Article
Consumer Identity and Loyalty in Electronic Product Offline Brand Operation: The Moderator Effect of Fanship
by Yitao Chen, Haijian Wang, Lei Wang and Jianyi Ding
Information 2021, 12(7), 282; https://doi.org/10.3390/info12070282 - 16 Jul 2021
Cited by 5 | Viewed by 2649
Abstract
Continuous enhancement of the intelligence of electronic products can lead to product homogenization and to innovation in offline experiential marketing. Diversified development of brand sales channels is inevitable if the diversified shopping demands of consumers are to be fulfilled. Based on 226 valid questionnaires, this study conducts empirical research with SPSS and AMOS to examine the impact of experience characteristics on consumer brand identity and brand loyalty, and then adds the fanship consumer attribute for a path-moderating analysis. The results illustrate the following: (a) acting and relating experiences affect brand cognitive identity, while thinking, acting, and relating experiences positively affect brand emotional identity; (b) cognitive identity and emotional identity jointly create brand loyalty and play a partial mediating role between offline experience and brand loyalty; and (c) the higher the fanship, the higher the consumer identity and, in turn, the brand loyalty. Overall, this study provides a basis for decision-making and suggestions for the offline operation of electronic brands.

17 pages, 4470 KiB  
Article
Dynamic Optimal Travel Strategies in Intelligent Stochastic Transit Networks
by Agostino Nuzzolo and Antonio Comi
Information 2021, 12(7), 281; https://doi.org/10.3390/info12070281 - 13 Jul 2021
Cited by 15 | Viewed by 2277
Abstract
This paper addresses the search for a run-based dynamic optimal travel strategy, to be supplied through mobile apps to travelers on a stochastic multiservice transit network equipped with a system that forecasts bus travel times and bus arrival times at stops. The run-based optimal strategy is obtained as a heuristic solution to a Markovian decision problem. The hallmarks of this paper are its proposals to use only traveler state spaces, together with estimates of the dispersion of forecast bus arrival times at stops, to determine transition probabilities. The first part of the paper analyses some existing line-based and run-based optimal strategy search methods. The second part presents aspects of dynamic transition probability computation in intelligent transit systems and proposes and applies a new method for dynamic run-based optimal strategy search.

13 pages, 647 KiB  
Article
Moderating Effect of Gender on the Relationship between Technology Readiness Index and Consumers’ Continuous Use Intention of Self-Service Restaurant Kiosks
by Tae-Kyun Na, Sun-Ho Lee and Jae-Yeon Yang
Information 2021, 12(7), 280; https://doi.org/10.3390/info12070280 - 10 Jul 2021
Cited by 14 | Viewed by 5977
Abstract
This study analyzes the moderating effect of gender on the relationship between technology readiness and the willingness to continue using self-service kiosks in fast-food restaurants among middle-aged and older consumers. From 1 May to 30 May 2020, we surveyed 320 consumers born in or before 1980 who had used kiosks only in fast-food restaurants. The findings are as follows: First, the more innovative and optimistic the consumers, the more willing they are to continue using kiosks, whereas the more discomfort they feel, the less likely they are to continue. Second, among technology readiness factors, a sense of insecurity has no significant effect on the willingness to continue using kiosks. Third, among innovative consumers, men were more likely than women to continue using kiosks. Fast-food restaurant managers therefore need to recognize that men and women perceive technology-based self-service differently.

11 pages, 794 KiB  
Article
Improving Physical Layer Security of Cooperative NOMA System with Wireless-Powered Full-Duplex Relaying
by Yuan Ren, Yixuan Tan, Meruyert Makhanbet and Xuewei Zhang
Information 2021, 12(7), 279; https://doi.org/10.3390/info12070279 - 10 Jul 2021
Cited by 7 | Viewed by 2474
Abstract
Non-orthogonal multiple access (NOMA) and wireless energy harvesting are two promising technologies for improving spectral efficiency and energy efficiency, respectively. In this paper, we study the physical layer security of a wireless-powered full-duplex (FD) relay-aided cooperative NOMA system in which the source is wiretapped by an eavesdropper and the FD relay, with self-energy recycling, assists the transmission from the source to a near user and a far user. To enhance the security of the system, we propose an artificial noise (AN)-aided cooperative transmission scheme in which the relay emits a jamming signal to confuse the eavesdropper while receiving the signal from the source. For the proposed scheme, the ergodic secrecy sum rate (ESSR) is derived to characterize the secrecy performance, and a lower bound on the ESSR is obtained. Numerical results verify the accuracy of the theoretical analysis and demonstrate the superiority of the proposed AN-aided scheme, which achieves better secrecy performance than the conventional cooperative NOMA scheme.
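The benefit of jamming the eavesdropper can be illustrated with the classic single-realization wiretap secrecy-rate formula (a textbook building block, not the paper's ESSR derivation; the SNR values below are arbitrary):

```python
import math

def secrecy_rate(snr_legit, snr_eve):
    # Wiretap-channel secrecy rate in bits/s/Hz: the capacity of the
    # legitimate link minus that of the eavesdropper's link, floored at
    # zero. Ergodic analyses like the paper's average such rates over
    # channel fading.
    return max(0.0, math.log2(1 + snr_legit) - math.log2(1 + snr_eve))

# Artificial noise degrades the eavesdropper's SNR, widening the gap:
without_an = secrecy_rate(snr_legit=15.0, snr_eve=7.0)
with_an = secrecy_rate(snr_legit=15.0, snr_eve=1.0)
```

The floor at zero captures the fact that no secrecy is achievable once the eavesdropper's channel is at least as good as the legitimate one.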
(This article belongs to the Special Issue Secure and Trustworthy Cyber–Physical Systems)

14 pages, 20714 KiB  
Article
Research on Generation Method of Grasp Strategy Based on DeepLab V3+ for Three-Finger Gripper
by Sanlong Jiang, Shaobo Li, Qiang Bai, Jing Yang, Yanming Miao and Leiyu Chen
Information 2021, 12(7), 278; https://doi.org/10.3390/info12070278 - 08 Jul 2021
Cited by 2 | Viewed by 2098
Abstract
A reasonable grasping strategy is a prerequisite for successfully grasping a target and a basic condition for the wide application of robots. Mainstream grippers on the market are divided into two-finger and three-finger grippers, and according to human grasping experience the stability of three-finger grippers is much better. This paper therefore focuses on a three-finger grasp strategy generation method based on the DeepLab V3+ algorithm. DeepLab V3+ uses atrous convolution kernels and an atrous spatial pyramid pooling (ASPP) architecture built on atrous convolution. An atrous convolution kernel can adjust the field-of-view of a filter layer by changing the dilation rate, and ASPP, by connecting atrous convolutional layers with multiple rates in parallel, effectively captures multi-scale information so that the model performs better on multi-scale objects. The article innovatively uses DeepLab V3+ to generate the grasp strategy for a target and optimizes the atrous convolution parameter values of ASPP. The Cornell Grasp dataset was used to train and verify the model, and a smaller, more complex dataset of 60 was also produced to reflect actual conditions. Testing yielded good experimental results.
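The atrous convolution and ASPP ideas summarized in this abstract can be sketched in a toy one-dimensional form (a generic illustration, not the authors' implementation; the kernel, rates, and input are arbitrary):

```python
def atrous_conv1d(x, kernel, rate):
    # 1-D atrous (dilated) convolution with zero padding so the output
    # has the same length as the input. The kernel taps are spaced
    # `rate` samples apart, enlarging the field-of-view without adding
    # parameters.
    k = len(kernel)
    pad = (k - 1) * rate // 2
    xp = [0.0] * pad + list(x) + [0.0] * pad
    return [sum(kernel[j] * xp[i + j * rate] for j in range(k))
            for i in range(len(x))]

def aspp_1d(x, kernel, rates=(1, 2, 4)):
    # Toy ASPP: apply the same kernel at several dilation rates in
    # parallel and stack the outputs, capturing multi-scale context.
    return [atrous_conv1d(x, kernel, r) for r in rates]

signal = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
feats = aspp_1d(signal, kernel=[1.0, 1.0, 1.0])
```

Each row of `feats` sees the signal at a different scale; the real ASPP does the same in 2-D with learned kernels, which is the multi-scale property the paper exploits for grasp regions of varying size.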
(This article belongs to the Special Issue Intelligent Control and Robotics)

24 pages, 5233 KiB  
Article
Formalizing the Blockchain-Based BlockVoke Protocol for Fast Certificate Revocation Using Colored Petri Nets
by Anant Sujatanagarjuna, Arne Bochem and Benjamin Leiding
Information 2021, 12(7), 277; https://doi.org/10.3390/info12070277 - 06 Jul 2021
Cited by 3 | Viewed by 2389
Abstract
Protocol flaws, such as the well-known Heartbleed bug, security and privacy issues, and incomplete specifications in general pose risks to the direct users of a protocol and to further stakeholders. Formal methods, such as Colored Petri Nets (CPNs), facilitate the design, development, analysis and verification of new protocols; the detection of flaws; and the mitigation of identified security risks. BlockVoke is a blockchain-based scheme that decentralizes certificate revocation, allows certificate owners and certificate authorities to revoke certificates, and rapidly distributes revocation information. CPNs are particularly well-suited to formalizing blockchain-based protocols; thus, in this work, we formalize the BlockVoke protocol using CPNs, resulting in a verifiable CPN model and a formal specification of the protocol. We use an agent-oriented modeling (AOM) methodology to create goal models and corresponding behavior interface models of BlockVoke. Subsequently, the protocol's semantics are defined, and the CPN models are derived and implemented using CPN Tools. Moreover, a full state-space analysis of the resulting CPN model is performed to derive relevant model properties of the protocol. The result is a complete and correct formal BlockVoke specification that can guide future implementations and security assessments.
(This article belongs to the Special Issue Secure Protocols for Future Technologies)

20 pages, 2588 KiB  
Article
A Simple Approach to Relating the Optimal Learning and the Meaningful Learning Experience in Students Age 14–16
by Ma. Guadalupe Díaz de León-López, María de Lourdes Velázquez-Sánchez, Silvia Sánchez-Madrid and José Manuel Olais-Govea
Information 2021, 12(7), 276; https://doi.org/10.3390/info12070276 - 05 Jul 2021
Cited by 1 | Viewed by 2562
Abstract
Using a questionnaire administered in real time to students aged 14–16 during a distance class, the authors appraise whether the students experience feelings that lead to a central experience of flow, according to the flow theory of positive psychology. Students are exposed to a planned session that considers the moments of the training sequence and consciously integrates technological tools to support learning. A formal evaluation system, including formative and summative evaluations, determines whether students build meaningful learning. This research contributes to the understanding that an optimal learning experience characterized by the pedagogical principles of curiosity, concentration, challenge, and enjoyment favors the construction of meaningful learning. Furthermore, the simplicity of the proposed experimental design suggests a direct way to replicate the study in later learning stages and to assess the efficiency of new technology-based pedagogies within the distance education paradigm imposed by the 2020 pandemic crisis.

30 pages, 742 KiB  
Article
Corporate Governance of Artificial Intelligence in the Public Interest
by Peter Cihon, Jonas Schuett and Seth D. Baum
Information 2021, 12(7), 275; https://doi.org/10.3390/info12070275 - 05 Jul 2021
Cited by 21 | Viewed by 12647
Abstract
Corporations play a major role in artificial intelligence (AI) research, development, and deployment, with profound consequences for society. This paper surveys opportunities to improve how corporations govern their AI activities so as to better advance the public interest. It focuses on the roles of and opportunities for a wide range of actors both inside the corporation (managers, workers, and investors) and outside it (corporate partners and competitors, industry consortia, nonprofit organizations, the public, the media, and governments). Whereas prior work on multistakeholder AI governance has proposed dedicated institutions to bring together diverse actors and stakeholders, this paper explores the opportunities these actors have even in the absence of dedicated multistakeholder institutions. It illustrates these opportunities with many cases, including the participation of Google in the U.S. Department of Defense Project Maven; the publication of potentially harmful AI research by OpenAI, with input from the Partnership on AI; and the sale of facial recognition technology to law enforcement by corporations including Amazon, IBM, and Microsoft. These and other cases demonstrate the wide range of mechanisms for advancing AI corporate governance in the public interest, especially when diverse actors work together.
(This article belongs to the Section Artificial Intelligence)

15 pages, 385 KiB  
Article
Change Point Detection in Terrorism-Related Online Content Using Deep Learning Derived Indicators
by Ourania Theodosiadou, Kyriaki Pantelidou, Nikolaos Bastas, Despoina Chatzakou, Theodora Tsikrika, Stefanos Vrochidis and Ioannis Kompatsiaris
Information 2021, 12(7), 274; https://doi.org/10.3390/info12070274 - 02 Jul 2021
Cited by 11 | Viewed by 3055
Abstract
Given the increasing occurrence of deviant activities on online platforms, it is of paramount importance to develop methods and tools that allow in-depth analysis and understanding, so that effective countermeasures can be developed. This work proposes a framework for detecting statistically significant change points in terrorism-related time series, which may indicate the occurrence of events that warrant attention. Such change points may reflect changes in the attitude towards, and/or engagement with, terrorism-related activities and events, possibly signifying, for instance, an escalation in the radicalization process. In particular, the proposed framework involves: (i) classification of online textual data as terrorism- and hate speech-related, which can be considered indicators of potential criminal or terrorist activity; and (ii) change point analysis of the time series generated by these data. Applying change point detection (CPD) algorithms to the produced time series of these indicators, in either a univariate or a two-dimensional case, can estimate statistically significant changes in their structural behavior at certain time locations. To evaluate the proposed framework, we apply it to a publicly available dataset related to jihadist forums, and implement topic detection on the estimated change points to further assess its effectiveness.
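The change point idea can be illustrated with a minimal single mean-shift detector (a generic sketch, far simpler than the CPD algorithms the paper employs; the counts below are invented):

```python
def best_change_point(series):
    # Single mean-shift detector: choose the split index that minimises
    # the summed squared error of a two-segment piecewise-constant fit.
    def sse(segment):
        mean = sum(segment) / len(segment)
        return sum((v - mean) ** 2 for v in segment)
    n = len(series)
    return min(range(1, n), key=lambda t: sse(series[:t]) + sse(series[t:]))

# Hypothetical daily counts of posts classified as hate-speech-related;
# the level shift mid-series is the kind of structural change CPD flags.
counts = [2, 3, 2, 2, 3, 9, 10, 9, 11, 10]
cp = best_change_point(counts)
```

Real CPD methods extend this idea to multiple change points, statistical significance testing, and multivariate series, but the cost-minimising split is the common core.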
(This article belongs to the Special Issue Predictive Analytics and Illicit Activities)

11 pages, 3214 KiB  
Article
Environment Monitoring System of Dairy Cattle Farming Based on Multi Parameter Fusion
by Yunlong Qu, Guiling Sun, Bowen Zheng and Wang Liu
Information 2021, 12(7), 273; https://doi.org/10.3390/info12070273 - 01 Jul 2021
Cited by 5 | Viewed by 2509
Abstract
To address the difficulty of obtaining environmental parameters in dairy cattle breeding, this paper proposes and implements a breeding environment monitoring system based on Bluetooth and the B/S architecture. To reduce the cost of cross-platform deployment, the system adopts the B/S architecture with a Bootstrap responsive layout; to improve human–computer interaction, the Echarts graphical plug-in is introduced; and to enhance the stability of Bluetooth communication, a time-sharing connection mechanism and a sampling-cycle adaptive adjustment mechanism are designed. Experimental results show that the system offers a good user experience on various smart terminal devices, and that the time-sharing connection mechanism solves the repeated-disconnection problem of the Bluetooth one-master, multiple-slave star connection. The system provides real-time monitoring and accurate early warning of the dairy cow growth environment, reduces deployment and operating costs, and has broad application prospects.
(This article belongs to the Section Information Applications)

20 pages, 4532 KiB  
Review
Applications of Recurrent Neural Network for Biometric Authentication & Anomaly Detection
by Joseph M. Ackerson, Rushit Dave and Naeem Seliya
Information 2021, 12(7), 272; https://doi.org/10.3390/info12070272 - 01 Jul 2021
Cited by 43 | Viewed by 6885
Abstract
Recurrent Neural Networks are powerful machine learning frameworks that allow data to be saved and referenced in a temporal sequence. This opens many new possibilities in fields such as handwriting analysis and speech recognition. This paper explores current research on RNNs in four important areas: biometric authentication, expression recognition, anomaly detection, and applications to aircraft. It reviews the methodology, purpose, results, benefits and drawbacks of each proposed method. The reviewed approaches all leverage distinct RNN architectures, such as the popular Long Short-Term Memory (LSTM) RNN or a Deep-Residual RNN. The paper also examines which frameworks work best in certain situations, and the advantages and disadvantages of each proposed model.
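The core idea that an RNN "saves and references data in a temporal sequence" reduces to a recurrent state update, sketched here as a one-unit Elman-style cell (an illustrative toy, not any model from the review; the weights are arbitrary):

```python
import math

def rnn_step(h_prev, x, w_h=0.5, w_x=1.0, b=0.0):
    # One recurrent update: the new hidden state mixes the current input
    # with the previous state, so earlier inputs persist over time.
    return math.tanh(w_h * h_prev + w_x * x + b)

def run_rnn(inputs):
    h, states = 0.0, []
    for x in inputs:  # the hidden state carries a decaying memory
        h = rnn_step(h, x)
        states.append(h)
    return states

states = run_rnn([1.0, 0.0, 0.0])
```

After the single non-zero input, the state stays positive while fading, which is exactly the short-term memory that LSTM gating mechanisms were designed to preserve over longer spans.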
(This article belongs to the Section Review)

31 pages, 469 KiB  
Review
Challenges, Techniques, and Trends of Simple Knowledge Graph Question Answering: A Survey
by Mohammad Yani and Adila Alfa Krisnadhi
Information 2021, 12(7), 271; https://doi.org/10.3390/info12070271 - 30 Jun 2021
Cited by 17 | Viewed by 6288
Abstract
Simple questions are the most common type of question used to evaluate knowledge graph question answering (KGQA). A simple question is one whose answer can be captured by a factoid statement with a single relation or predicate. KGQA systems aim to automatically answer natural language questions (NLQs) over knowledge graphs (KGs). There is a variety of research with different approaches in this area; however, a comprehensive study addressing simple questions from all aspects has been lacking. In this paper, we present a comprehensive survey of answering simple questions that classifies available techniques and compares their advantages and drawbacks, in order to provide better insight into existing issues and recommendations to direct future work.
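The definition of a simple question as one answerable by "a factoid statement with one relation" can be made concrete with a toy triple-store lookup (the entities and relations below are illustrative, not drawn from any benchmark):

```python
# Toy knowledge graph: (subject, predicate) -> object facts.
kg = {
    ("Paris", "capital_of"): "France",
    ("France", "currency"): "Euro",
}

def answer_simple_question(subject, predicate):
    # Once the subject entity and the single relation have been
    # identified from the natural language question, answering reduces
    # to one fact lookup; this single-hop step is what makes the
    # question "simple".
    return kg.get((subject, predicate))
```

The hard parts surveyed in the paper are upstream of this lookup: linking the question's mention to the right entity and mapping its phrasing to the right predicate.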
(This article belongs to the Collection Knowledge Graphs for Search and Recommendation)

17 pages, 745 KiB  
Article
Can Spa Tourism Enhance Water Resources and Turn Them into a National Brand? A Theoretical Review about the Romanian Case
by Puiu Nistoreanu and Alina-Cerasela Aluculesei
Information 2021, 12(7), 270; https://doi.org/10.3390/info12070270 - 30 Jun 2021
Cited by 12 | Viewed by 3371
Abstract
This article presents descriptive research on how the water resources of Romanian medical spas could be better promoted to increase their visibility. Romania is one of the European countries with impressive potential in terms of balneology, with a wide diversity of natural factors that allow several medical conditions to be treated in the same resort. In addition, one-third of Europe's mineral and thermal water springs are on Romanian territory, making Romania one of the most important European destinations in terms of natural spa resources. The research illustrates how Romanian medical spas communicate with tourists about the therapeutic water available in five resorts: Băile Felix-1 Mai, Techirghiol, Băile Tușnad, Sovata and Covasna, with the main objective of raising awareness among spa representatives of the necessity of implementing water management. The research is based on primary data obtained from the official websites of the resorts included in the study and on published scholarly articles on Romanian medical spas.
(This article belongs to the Special Issue Enhancement of Local Resources through Tourism Activities)

11 pages, 1003 KiB  
Article
Anticipation Next: System-Sensitive Technology Development and Integration in Work Contexts
by Sarah Janböcke and Susanne Zajitschek
Information 2021, 12(7), 269; https://doi.org/10.3390/info12070269 - 29 Jun 2021
Cited by 1 | Viewed by 1903
Abstract
When discussing future concerns within socio-technical systems in work contexts, we often find descriptions of failed technology development and integration. The experience of technology that fails while being integrated is often rooted in dysfunctional epistemological approaches within the research and development process, ultimately leading to lasting distrust of technology in work contexts. This is true both for organizations that integrate new technologies and for organizations that invent them. Organizations in which we find failed technology development and integration are, by their very nature, social systems, and nowadays these complex social systems act within an even more complex environment. This urges the development of new anticipation methods for technology development and integration; gathering and dealing with complex information in this context is what we call Anticipation Next. This explorative work uses existing literature from the adjoining research fields of system theory, organizational theory, and socio-technical research to combine various concepts. We deliberately aim at a networked way of thinking in scientific contexts and thus combine multidisciplinary subject areas in one paper to present an innovative way of dealing with multi-faceted problems in a human-centred way. We end by suggesting a conceptual framework to be used in the very early stages of technology development and integration in work contexts.
(This article belongs to the Special Issue Big Data Integration and Intelligent Information Integration)
18 pages, 1034 KiB  
Article
Distributed Hypothesis Testing over Noisy Broadcast Channels
by Sadaf Salehkalaibar and Michèle Wigger
Information 2021, 12(7), 268; https://doi.org/10.3390/info12070268 - 29 Jun 2021
Cited by 3 | Viewed by 1761
Abstract
This paper studies binary hypothesis testing with a single sensor that communicates with two decision centers over a memoryless broadcast channel. The main focus lies on the tradeoff between the two type-II error exponents achievable at the two decision centers. In our proposed scheme, we can partially mitigate this tradeoff when the transmitter has a probability larger than 1/2 of distinguishing the alternative hypotheses at the decision centers, i.e., the hypotheses under which the decision centers wish to maximize their error exponents. In the cases where these hypotheses cannot be distinguished at the transmitter (because both decision centers have the same alternative hypothesis or because the transmitter’s observations have the same marginal distribution under both hypotheses), our scheme shows an important tradeoff between the two exponents. The results in this paper thus reinforce the previous conclusions drawn for a setup where communication is over a common noiseless link. Compared to such a noiseless scenario, however, we observe here that even when the transmitter can distinguish the two hypotheses, a small exponent tradeoff can persist, simply because the noise in the channel prevents the transmitter from perfectly describing its guess of the hypothesis to the two decision centers. Full article
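The type-II error exponent at a single decision center is governed by the Kullback–Leibler divergence between the observation distributions under the two hypotheses (Stein's lemma). The sketch below illustrates only that baseline quantity, not the paper's coding scheme, and the toy distributions are invented for the example:

```python
import numpy as np

def kl_divergence(p, q):
    """KL divergence D(p||q) between two discrete distributions (nats)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0  # terms with p(x) = 0 contribute nothing
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Two hypotheses on the sensor's observation alphabet (toy numbers).
p0 = [0.5, 0.3, 0.2]   # distribution under H0
p1 = [0.2, 0.3, 0.5]   # distribution under H1

# By Stein's lemma, the best type-II error exponent at a single decision
# center with unconstrained communication is D(p0||p1): the type-II error
# decays roughly as exp(-n * D(p0||p1)) over n observations.
exponent = kl_divergence(p0, p1)
print(round(exponent, 4))  # prints 0.2749
```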
(This article belongs to the Special Issue Statistical Communication and Information Theory)
12 pages, 247 KiB  
Article
The Human Digitalisation Journey: Technology First at the Expense of Humans?
by Hossein Hassani, Xu Huang and Emmanuel Silva
Information 2021, 12(7), 267; https://doi.org/10.3390/info12070267 - 29 Jun 2021
Cited by 9 | Viewed by 5594
Abstract
The ongoing COVID-19 pandemic has enhanced the impact of digitalisation as a driver of transformation and advancement across almost every aspect of human life. With the majority actively embracing smart technologies and their benefits, the journey of human digitalisation has begun. Will human beings remain solitary, unaffected beings in the middle of the whirlpool, a gateway to the completely digitalised future? This journey of human digitalisation probably started much earlier than we realised. This paper, in the format of an objective review and discussion, aims to investigate the journey of human digitalisation, explore the reality of domination between technology and humans, and provide a better understanding of human value and human vulnerability in this fast-transforming digital era, so as to offer valuable and insightful suggestions on the future direction of the human digitalisation journey. Full article
20 pages, 3032 KiB  
Article
Improving Imbalanced Land Cover Classification with K-Means SMOTE: Detecting and Oversampling Distinctive Minority Spectral Signatures
by Joao Fonseca, Georgios Douzas and Fernando Bacao
Information 2021, 12(7), 266; https://doi.org/10.3390/info12070266 - 29 Jun 2021
Cited by 11 | Viewed by 2986
Abstract
Land cover maps are a critical tool to support informed policy development, planning, and resource management decisions. Given these significant upsides, the automatic production of Land Use/Land Cover maps has been a topic of interest for the remote sensing community for several years, but it is still fraught with technical challenges. One such challenge is the imbalanced nature of most remotely sensed data. The asymmetric class distribution negatively impacts the performance of classifiers and adds a new source of error to the production of these maps. In this paper, we address the imbalanced learning problem by using K-means together with the Synthetic Minority Oversampling Technique (SMOTE) as an improved oversampling algorithm. K-means SMOTE improves the quality of newly created artificial data by addressing not only the between-class imbalance, as traditional oversamplers do, but also the within-class imbalance, avoiding the generation of noisy data while effectively overcoming data imbalance. The performance of K-means SMOTE is compared to three popular oversampling methods (Random Oversampling, SMOTE and Borderline-SMOTE) using seven remote sensing benchmark datasets, three classifiers (Logistic Regression, K-Nearest Neighbors and Random Forest) and three evaluation metrics, using a five-fold cross-validation approach with three different initialization seeds. The statistical analysis of the results shows that the proposed method consistently outperforms the remaining oversamplers, producing higher-quality land cover classifications. These results suggest that LULC data can benefit significantly from more sophisticated oversamplers, as spectral signatures for the same class can vary according to geographical distribution. Full article
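The core idea, clustering first and then interpolating only inside minority-dominated clusters, can be sketched in plain NumPy. This is a simplified illustration with an ad hoc farthest-point k-means initialization, not the implementation evaluated in the paper:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain Lloyd's k-means with greedy farthest-point initialization."""
    centers = [X[0]]
    for _ in range(1, k):
        d = np.min(((X[:, None] - np.array(centers)) ** 2).sum(-1), axis=1)
        centers.append(X[np.argmax(d)])  # next center: farthest point so far
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def kmeans_smote(X, y, minority, n_new, k=3, seed=0):
    """Oversample the minority class only inside clusters it dominates,
    interpolating between pairs of same-cluster minority samples (SMOTE)."""
    rng = np.random.default_rng(seed)
    labels = kmeans(X, k)
    # Filter step: keep clusters where the minority class is the majority.
    good = [j for j in range(k)
            if np.any(labels == j) and (y[labels == j] == minority).mean() > 0.5]
    pool = X[(y == minority) & np.isin(labels, good)]
    synthetic = []
    while len(synthetic) < n_new and len(pool) >= 2:
        a, b = pool[rng.choice(len(pool), 2, replace=False)]
        synthetic.append(a + rng.random() * (b - a))  # point on segment a-b
    return np.array(synthetic)
```

Generating only within minority-dominated clusters is what avoids placing synthetic points between distinct spectral signatures of the same class.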
(This article belongs to the Special Issue Remote Sensing and Spatial Data Science)
17 pages, 6644 KiB  
Article
Visual Active Learning for Labeling: A Case for Soundscape Ecology Data
by Liz Huancapaza Hilasaca, Milton Cezar Ribeiro and Rosane Minghim
Information 2021, 12(7), 265; https://doi.org/10.3390/info12070265 - 29 Jun 2021
Cited by 1 | Viewed by 2091
Abstract
Labeling of samples is a recurrent and time-consuming task in data analysis and machine learning, yet it is generally overlooked as a target for visual analytics approaches that could improve the process. As the number of tailored applications of learning models increases, it is crucial that more effective approaches to labeling are developed. In this paper, we report the development of a methodology and a framework to support labeling, with an application case as background. The methodology performs visual active learning and label propagation with 2D embeddings as layouts to achieve faster, interactive labeling of samples. The framework is realized through SoundscapeX, a tool to support labeling in soundscape ecology data. We have applied the framework to a set of audio recordings collected for a Long Term Ecological Research Project in the Cantareira-Mantiqueira Corridor (LTER CCM), located in the transition zone between northeastern São Paulo state and southern Minas Gerais state in Brazil. We employed a pre-labeled data set of groups of animals to test the efficacy of the approach. The results showed best accuracies of 94.58% for predicting the labels of birds and insects, and of 91.09% for predicting sound events as frogs and insects. Full article
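As a drastically simplified sketch of the label-propagation side of such a pipeline (the paper's method additionally couples this with 2D embeddings and user interaction), unlabeled samples can inherit the label of their nearest labeled neighbor over repeated sweeps; the data below are invented:

```python
import numpy as np

def propagate_labels(X, y, n_iter=10):
    """Each unlabeled sample (y == -1) takes the label of its nearest
    currently-labeled neighbor; repeated sweeps spread labels outward."""
    y = y.copy()
    d = ((X[:, None] - X[None]) ** 2).sum(-1)  # pairwise squared distances
    np.fill_diagonal(d, np.inf)                # a point is not its own neighbor
    for _ in range(n_iter):
        labeled = y >= 0
        if labeled.all():
            break
        for i in np.where(~labeled)[0]:
            di = np.where(labeled, d[i], np.inf)  # distances to labeled points only
            j = int(np.argmin(di))
            if np.isfinite(di[j]):
                y[i] = y[j]
    return y
```

With two well-separated groups and one labeled sample per group, every remaining sample picks up its group's label in the first sweep.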
(This article belongs to the Special Issue Trends and Opportunities in Visualization and Visual Analytics)
18 pages, 4166 KiB  
Article
Combine-Net: An Improved Filter Pruning Algorithm
by Jinghan Wang, Guangyue Li and Wenzhao Zhang
Information 2021, 12(7), 264; https://doi.org/10.3390/info12070264 - 29 Jun 2021
Cited by 2 | Viewed by 2803
Abstract
The powerful performance of deep learning is evident to all. As research deepens, neural networks have become more complex and cannot easily be deployed on resource-constrained devices. The emergence of a series of model compression algorithms makes artificial intelligence on the edge possible. Among them, structured model pruning is widely utilized because of its versatility. Structured pruning prunes the neural network itself, discarding some relatively unimportant structures to compress the model’s size. However, previous pruning work leaves open problems such as evaluation errors of networks, empirical determination of the pruning rate, and low retraining efficiency. Therefore, we propose an accurate, objective, and efficient pruning algorithm, Combine-Net, introducing Adaptive BN to eliminate evaluation errors, the Kneedle algorithm to determine the pruning rate objectively, and knowledge distillation to improve the efficiency of retraining. Results show that, without precision loss, Combine-Net achieves 95% parameter compression and 83% computation compression for VGG16 on CIFAR10, and 71% parameter compression and 41% computation compression for ResNet50 on CIFAR100. Experiments on different datasets and models show that Combine-Net can efficiently compress a neural network’s parameters and computation. Full article
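The structured pruning step itself can be sketched in a few lines: score each convolutional filter and keep only the strongest fraction. The L1-norm score below is a common generic proxy, not Combine-Net's contribution (which lies in Adaptive BN evaluation and Kneedle-based rate selection):

```python
import numpy as np

def prune_filters(weights, keep_ratio):
    """Structured pruning sketch: rank a conv layer's filters by L1 norm
    and keep the top `keep_ratio` fraction.
    weights: array of shape (n_filters, in_channels, kh, kw)."""
    n_keep = max(1, int(round(len(weights) * keep_ratio)))
    scores = np.abs(weights).reshape(len(weights), -1).sum(axis=1)  # L1 per filter
    keep = np.sort(np.argsort(scores)[::-1][:n_keep])  # indices of strongest filters
    return weights[keep], keep
```

Dropping whole filters (rather than individual weights) is what keeps the pruned network a dense, hardware-friendly model.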
(This article belongs to the Special Issue Artificial Intelligence on the Edge)
15 pages, 2201 KiB  
Article
Identification of Fake Stereo Audio Using SVM and CNN
by Tianyun Liu, Diqun Yan, Rangding Wang, Nan Yan and Gang Chen
Information 2021, 12(7), 263; https://doi.org/10.3390/info12070263 - 28 Jun 2021
Cited by 20 | Viewed by 3525
Abstract
The number of channels is one of the important criteria of digital audio quality. Generally, stereo audio with two channels provides better perceptual quality than mono audio. To seek illegal commercial benefit, one might convert mono audio to fake-quality stereo. Identifying such stereo-faking audio is a lesser-investigated audio forensic issue. In this paper, a stereo-faking corpus is first presented, which is created using the Haas effect technique. Two identification algorithms for fake stereo audio are then proposed: one based on Mel-frequency cepstral coefficient features and support vector machines, the other based on a specially designed five-layer convolutional neural network. The experimental results on two datasets with five different cut-off frequencies show that the proposed algorithms can effectively detect stereo-faking audio with good robustness. Full article
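The Haas-effect conversion used to build such a corpus can be illustrated in a few lines: the fake right channel is a copy of the mono signal delayed by a few tens of milliseconds, which listeners fuse into a single, seemingly wider source. The sample rate and delay below are illustrative, not the paper's corpus parameters:

```python
import numpy as np

def fake_stereo(mono, sr=44100, delay_ms=15.0):
    """Create 'fake' stereo from a mono signal via the Haas effect:
    the right channel is a slightly delayed copy of the left."""
    delay = int(sr * delay_ms / 1000.0)  # delay in samples
    right = np.concatenate([np.zeros(delay), mono])[:len(mono)]
    return np.stack([mono, right], axis=1)  # shape (n_samples, 2)
```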
11 pages, 362 KiB  
Article
An Online Iterative Linear Quadratic Approach for a Satisfactory Working Point Attainment at FERMI
by Niky Bruchon, Gianfranco Fenu, Giulio Gaio, Simon Hirlander, Marco Lonza, Felice Andrea Pellegrino and Erica Salvato
Information 2021, 12(7), 262; https://doi.org/10.3390/info12070262 - 26 Jun 2021
Viewed by 1815
Abstract
The attainment of a satisfactory operating point is one of the main problems in the tuning of particle accelerators. These are extremely complex facilities, characterized by the absence of a model that accurately describes their dynamics, and by an often persistent noise which, along with machine drifts, affects their behaviour in unpredictable ways. In this paper, we propose an online iterative Linear Quadratic Regulator (iLQR) approach to tackle this problem on the FERMI free-electron laser of Elettra Sincrotrone Trieste. It consists of a model identification performed by a neural network trained on data collected from the real facility, followed by the application of the iLQR in a Model-Predictive Control fashion. We perform several experiments, training the neural network with increasing amounts of data, in order to understand what level of model accuracy is needed to accomplish the task. We empirically show that the online iLQR requires, on average, fewer steps than simple gradient ascent (GA), and needs a less accurate neural network to achieve the goal. Full article
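At the heart of each iLQR iteration is a finite-horizon LQR backward pass over a (locally) linear model. A minimal sketch for a time-invariant linear system is shown below; the paper's setting additionally identifies the model with a neural network and replans online in MPC fashion:

```python
import numpy as np

def lqr_gains(A, B, Q, R, horizon):
    """Finite-horizon discrete LQR backward (Riccati) pass: returns gains
    K_t such that u_t = -K_t x_t minimises the sum of x'Qx + u'Ru."""
    P = Q.copy()
    gains = []
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)  # Riccati recursion
        gains.append(K)
    return gains[::-1]  # reordered so gains[t] applies at time t
```

Rolling the controller forward on a toy double integrator drives the state to the origin:

```python
A = np.array([[1.0, 1.0], [0.0, 1.0]])  # double integrator
B = np.array([[0.0], [1.0]])
Ks = lqr_gains(A, B, np.eye(2), np.array([[1.0]]), 50)
x = np.array([[1.0], [0.0]])
for K in Ks:
    x = A @ x + B @ (-K @ x)
```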
(This article belongs to the Special Issue Machine Learning and Accelerator Technology)
12 pages, 1657 KiB  
Article
Creative Intervention for Acrophobia Sufferers through AIVE Concept
by Al Hamidy Hazidar, Riza Sulaiman, Shalisah Sharip, Meutia Wardhanie Ganie, Azlin Baharudin, Hamzaini Abdul Hamid and Norshita Mat Nayan
Information 2021, 12(7), 261; https://doi.org/10.3390/info12070261 - 26 Jun 2021
Viewed by 1860
Abstract
This research applies exposure therapy through the visual technology of virtual reality (VR). The motivation is to create an intervention that uses regular smartphone devices, implemented in VR with Google Cardboard as the visual display medium for exposure therapy at high altitudes. The VR application in this research, called acrophobia immersive virtual exposure (AIVE), was developed with the Unity3D software. Exposure therapy was employed as a therapeutic medium for acrophobia sufferers. A questionnaire was administered to measure the usefulness of the application and devices in the VR environment created, and 20 users tested the VR device. An existing questionnaire was revised for acrophobia sufferers and then used as a measurement index in the VR environment. The research is expected to inform the design of a simulator and a therapeutic medium using immersive VR devices in future studies. Full article
17 pages, 435 KiB  
Article
From Potential to Real Threat? The Impacts of Technology Attributes on Licensing Competition—Evidence from China during 2002–2013
by Ming Li, Jason Li-Ying, Yuandi Wang and Xiangdong Chen
Information 2021, 12(7), 260; https://doi.org/10.3390/info12070260 - 24 Jun 2021
Viewed by 1603
Abstract
Prior studies have extensively discussed firms’ propensity for licensing under different levels of competition. This study clarifies the differences between potential technology competition (PTC) and actual licensing competition (ALC). We investigate the relationship between these two types of competition in the context of the Chinese patent licensing landscape, using patent licensing data from 2002–2013. We find that the positive effect of PTC on ALC is contingent upon the nature of the licensed patent, such as its generality, complexity, and newness. Our findings help scholars and managers interested in licensing to understand and monitor the likelihood of licensing competition. Policy implications are presented at the end of this study. Full article
18 pages, 739 KiB  
Article
Content Management Systems Performance and Compliance Assessment Based on a Data-Driven Search Engine Optimization Methodology
by Ioannis Drivas, Dimitrios Kouis, Daphne Kyriaki-Manessi and Georgios Giannakopoulos
Information 2021, 12(7), 259; https://doi.org/10.3390/info12070259 - 24 Jun 2021
Cited by 4 | Viewed by 4381
Abstract
While the digitalization of cultural organizations is in full swing and growth, it is common knowledge that websites can be used as a beacon to expand the awareness and consideration of their services on the Web. Nevertheless, recent research results indicate the managerial difficulties in deploying strategies for expanding the discoverability, visibility, and accessibility of these websites. In this paper, a three-stage data-driven Search Engine Optimization (SEO) schema is proposed to assess the performance of Libraries, Archives, and Museums (LAMs) websites, thus helping administrators expand their discoverability, visibility, and accessibility within the Web realm. To do so, the authors examine the performance of 341 related websites from all over the world on three factors: Content Curation, Speed, and Security. In the first stage, a statistically reliable and consistent assessment schema for evaluating the SEO performance of LAMs websites through the integration of more than 30 variables is presented. The second stage involves a descriptive data summarization that provides initial performance estimates for the examined websites on each factor. In the third stage, predictive regression models are developed to understand and compare the SEO performance of three different Content Management Systems, namely Drupal, WordPress, and custom approaches, that LAMs websites have adopted. The results of this study constitute a solid stepping-stone for both practitioners and researchers to adopt and improve such methods, which focus on end-users and boost organizational structures and cultures that rely on data-driven approaches for expanding the visibility of LAMs services. Full article
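The third-stage comparison can be pictured as an ordinary least-squares model with one-hot CMS indicators. The variable names and data below are invented for illustration and are not the study's actual 30+ variables or regression specification:

```python
import numpy as np

def fit_cms_model(speed, security, cms, score):
    """Least-squares sketch: regress an SEO performance score on speed,
    security and one-hot CMS indicators (e.g. custom/Drupal/WordPress).
    The first CMS category is the baseline (its dummy is dropped)."""
    names = sorted(set(cms))
    onehot = np.array([[c == n for n in names] for c in cms], float)
    X = np.column_stack([np.ones(len(score)), speed, security, onehot[:, 1:]])
    coef, *_ = np.linalg.lstsq(X, np.asarray(score, float), rcond=None)
    return names, coef  # coef: intercept, speed, security, then CMS offsets
```

The fitted CMS coefficients are then directly interpretable as performance offsets relative to the baseline CMS, holding the other factors fixed.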
(This article belongs to the Special Issue Big Data Integration and Intelligent Information Integration)
41 pages, 3342 KiB  
Article
Towards Flexible Retrieval, Integration and Analysis of JSON Data Sets through Fuzzy Sets: A Case Study
by Paolo Fosci and Giuseppe Psaila
Information 2021, 12(7), 258; https://doi.org/10.3390/info12070258 - 22 Jun 2021
Cited by 12 | Viewed by 2052
Abstract
How can one exploit the incredible variety of JSON data sets currently available on the Internet, for example, on Open Data portals? The traditional approach would require getting them from the portals, storing them in some JSON document store and integrating them within the document store. However, once data are integrated, the lack of a query language that provides flexible querying capabilities could prevent analysts from successfully completing their analysis. In this paper, we show how the J-CO Framework, a novel framework that we developed at the University of Bergamo (Italy) to manage large collections of JSON documents, is a unique and innovative tool that provides analysts with querying capabilities based on fuzzy sets over JSON data sets. Its query language, called J-CO-QL, is continuously evolving to increase potential applications; the most recent extensions give analysts the capability to retrieve data sets directly from web portals, to apply fuzzy set theory to JSON documents, and to perform imprecise queries on documents by means of flexible soft conditions. This paper presents a practical case study in which real data sets are retrieved, integrated and analyzed to effectively show the unique and innovative capabilities of the J-CO Framework. Full article
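The soft-condition idea can be mimicked in plain Python: a membership function grades each JSON document in [0, 1], and a soft selection keeps documents above an alpha cut. The trapezoidal function and toy documents below are illustrative only, not J-CO-QL syntax:

```python
import json

def membership_trapezoid(x, a, b, c, d):
    """Trapezoidal membership function: 0 below a, rising to 1 on [b, c],
    falling back to 0 at d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def soft_select(docs, field, mf, alpha=0.5):
    """Keep JSON documents whose membership degree for `field` is at
    least alpha; return (doc, degree) pairs, best match first."""
    scored = [(doc, mf(doc[field])) for doc in docs if field in doc]
    return sorted([s for s in scored if s[1] >= alpha], key=lambda s: -s[1])

docs = json.loads(
    '[{"city": "A", "temp": 19}, {"city": "B", "temp": 35},'
    ' {"city": "C", "temp": 24}]')
warm = lambda t: membership_trapezoid(t, 15, 22, 28, 35)
result = soft_select(docs, "temp", warm)
print([(d["city"], round(m, 2)) for d, m in result])  # [('C', 1.0), ('A', 0.57)]
```

Unlike a crisp filter, the query returns a ranking by degree of satisfaction, so borderline documents survive with a lower score instead of being discarded.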
10 pages, 496 KiB  
Review
Survey of Smart Contract Framework and Its Application
by Edi Surya Negara, Achmad Nizar Hidayanto, Ria Andryani and Rezki Syaputra
Information 2021, 12(7), 257; https://doi.org/10.3390/info12070257 - 22 Jun 2021
Cited by 21 | Viewed by 4671
Abstract
This article is a literature review of smart contract applications in various domains. The aim is to investigate technological developments and the implementation of smart contracts across domains. For this purpose, the theoretical bases of various papers published in recent years are used as sources for theoretical and implementation studies. Smart contracts are an emerging technology developing in line with blockchain technology. Our literature review explains that smart contracts automatically execute, control, or document legally relevant events and actions in accordance with the agreements set forth in the contract. This technology is expected to provide solutions for trust, security, and transparency in various domains. The review was conducted using an exploratory approach and focuses on frameworks, methods, and simulations of smart contract implementations in various domains. Full article
(This article belongs to the Section Review)