Information, Volume 11, Issue 6 (June 2020) – 58 articles

Cover Story: The paper proposes a methodology for forecasting the movements of analysts' net income estimates and those of stock prices. We achieve this by applying natural language processing and neural networks in the context of analyst reports. In the pre-experiment, we applied our method to extract opinion sentences from the analyst report while classifying the remaining parts as non-opinion sentences. Then, we performed two additional experiments. First, we employed our proposed method for forecasting the movements of analysts' net income estimates by inputting the opinion and non-opinion sentences into separate neural networks. In addition to the reports, we inputted the trend of the net income estimate to the networks. Second, we employed our proposed method to forecast the movements of stock prices.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
11 pages, 3446 KiB  
Article
Standardization Procedure for Data Exchange
by Yoshiaki Fukami
Information 2020, 11(6), 339; https://doi.org/10.3390/info11060339 - 25 Jun 2020
Cited by 2 | Viewed by 2947
Abstract
Common specification of data promotes data exchange among many and unspecified individuals and organizations. However, standardization itself tends to discourage innovation that can create new uses of data. To overcome this dilemma of innovation and standardization, this paper analyzes and proposes hypotheses regarding the process through which the World Wide Web Consortium (W3C) has realized innovations such as web applications by updating the standard. I hypothesize the following changes in standardization process management at the W3C as key factors supporting innovation through standardization among stakeholders with conflicting interests: (1) defining the scope of the specifications to be developed according to functions instead of technical structures; (2) designing a development management policy based on feedback from implementations, referred to as an “implementation-oriented policy”; (3) including diversified stakeholders in open standardization processes that facilitate consensus formation and the diffusion of developed standards; and (4) adopting a royalty-free policy to encourage third-party developers to implement proposed specifications and advance the updating of proposals. This single-case analysis sheds light on the development and diffusion of common technological data specifications, which are driving factors for innovation utilizing big data generated by exchanging data of various origins. Full article
(This article belongs to the Special Issue CDEC: Cross-disciplinary Data Exchange and Collaboration)
Show Figures

Figure 1

35 pages, 5825 KiB  
Article
Linking Theories, Past Practices, and Archaeological Remains of Movement through Ontological Reasoning
by Laure Nuninger, Philip Verhagen, Thérèse Libourel, Rachel Opitz, Xavier Rodier, Clément Laplaige, Catherine Fruchart, Samuel Leturcq and Nathanael Levoguer
Information 2020, 11(6), 338; https://doi.org/10.3390/info11060338 - 24 Jun 2020
Cited by 12 | Viewed by 4895
Abstract
The amount of information available to archaeologists has grown dramatically during the last ten years. The rapid acquisition of observational data and creation of digital data has played a significant role in this “information explosion”. In this paper, we propose new methods for knowledge creation in studies of movement, designed for the present data-rich research context. Using three case studies, we analyze how researchers have identified, conceptualized, and linked the material traces describing various movement processes in a given region. Then, we explain how we construct ontologies that enable us to explicitly relate material elements, identified in the observed landscape, to the knowledge or theory that explains their role and relationships within the movement process. Combining formal pathway systems and informal movement systems through these three case studies, we argue that these systems are not hierarchically integrated, but rather intertwined. We introduce a new heuristic tool, the “track graph”, to record observed material features in a neutral form which can be employed to reconstruct the trajectories of journeys which follow different movement logics. Finally, we illustrate how the breakdown of implicit conceptual references into explicit, logical chains of reasoning, describing basic entities and their relationships, allows the use of these constituent elements to reconstruct, analyze, and compare movement practices from the bottom up. Full article
(This article belongs to the Special Issue Digital Humanities)
Show Figures

Figure 1
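The "track graph" heuristic described above can be illustrated with a minimal sketch: observed material features become nodes, physically possible passages become edges, and candidate journeys are simple paths through the graph. All feature names and the layout below are invented for illustration; the paper's actual ontologies are far richer.

```python
# A minimal "track graph": nodes are observed material features
# (hollow ways, fords, gates), edges are physically possible passages.
track_graph = {
    "hollow_way_A": ["ford_1", "gate_2"],
    "ford_1": ["terrace_3"],
    "gate_2": ["terrace_3"],
    "terrace_3": [],
}

def journeys(graph, start, end, path=None):
    """Enumerate all simple paths (candidate journeys) from start to end."""
    path = (path or []) + [start]
    if start == end:
        return [path]
    found = []
    for nxt in graph.get(start, []):
        if nxt not in path:            # keep paths simple (no revisits)
            found.extend(journeys(graph, nxt, end, path))
    return found

routes = journeys(track_graph, "hollow_way_A", "terrace_3")
for r in routes:
    print(" -> ".join(r))            # two alternative journeys
```

Because the graph stays neutral about which route was actually used, the same structure can serve to compare reconstructions under different movement logics.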

21 pages, 2236 KiB  
Article
Modeling Web Client and System Behavior
by Tomasz Rak
Information 2020, 11(6), 337; https://doi.org/10.3390/info11060337 - 24 Jun 2020
Cited by 5 | Viewed by 2963
Abstract
Web systems are becoming more and more popular. An efficiently working network system is the basis for the functioning of every enterprise. Performance models are powerful tools for performance prediction, but their creation requires significant effort. In this article, we present various performance models of clients and Web systems. In particular, we examine system behaviour related to the different flow routes of clients through the system. We therefore propose Queueing Petri Nets as a modeling methodology for dealing with the performance issues of production systems, following a simulation-based approach. We consider 25 different models to check performance, and then evaluate them based on the proposed metrics. The validation results show that the model is able to predict performance with a relative error lower than 20%. Our evaluation shows that the prepared models can reduce the effort of production system preparation. The resulting performance model can predict system behaviour in a particular layer at the indicated load. Full article
(This article belongs to the Special Issue Data Analytics and Consumer Behavior)
Show Figures

Figure 1
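The simulation-based prediction idea above can be illustrated in a far simpler setting than Queueing Petri Nets: a single-server queue whose simulated mean response time can be checked against the analytical M/M/1 value 1/(μ − λ). The arrival and service rates below are invented, not taken from the paper.

```python
import random

def simulate_mm1(arrival_rate, service_rate, n_requests, seed=42):
    """Discrete-event sketch of a single-server queue; returns mean response time."""
    rng = random.Random(seed)
    clock = 0.0            # arrival time of the current client
    server_free_at = 0.0   # when the server finishes its current job
    total_response = 0.0
    for _ in range(n_requests):
        clock += rng.expovariate(arrival_rate)   # next client arrives
        start = max(clock, server_free_at)       # wait if the server is busy
        service = rng.expovariate(service_rate)
        server_free_at = start + service
        total_response += server_free_at - clock
    return total_response / n_requests

# Analytical M/M/1 mean response time is 1/(mu - lambda) = 1/(10 - 5) = 0.2 s,
# so the simulated estimate should land near 0.2.
print(round(simulate_mm1(5.0, 10.0, 50000), 3))
```

The paper's 20% relative-error criterion can be checked the same way: compare the simulated estimate against a measured or analytical reference.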

11 pages, 1019 KiB  
Article
Information and Communication Technology-Enhanced Business and Managerial Communication in SMEs in the Czech Republic
by Marcel Pikhart and Blanka Klimova
Information 2020, 11(6), 336; https://doi.org/10.3390/info11060336 - 24 Jun 2020
Cited by 4 | Viewed by 5691
Abstract
Current managerial communication in the global business world has recently experienced dramatic and unprecedented changes connected to the use of Information and Communication Technology (ICT) in business and managerial communication. The objective of this paper is to analyze the changes in ICT-enhanced business and managerial communication in Small and Medium Enterprises (SMEs) in the Czech Republic. The use of ICT in business and managerial communication is obvious and brings various benefits, but it also has some drawbacks that should be identified and analyzed. From a methodological point of view, this study is twofold. Firstly, we conduct a systematic review of the current literature on the topic of business and managerial communication, providing an understanding of the recent development in the area of business and managerial communication. Secondly, we conduct qualitative research into the current state of ICT-enhanced managerial and business communication in several SMEs in the Czech Republic. The findings of the literature research show that there are two key aspects that define modern business and managerial communication, i.e., interculturality and interconnectedness. These two aspects of business and managerial communication are very recent, and they bring many challenges that must be considered in order to optimize communication. These altered communication paradigms have the potential to improve global competitiveness and produce new opportunities in the global market. The second part of the research shows that the general awareness of the changes in business communication is limited, and this could potentially pose a threat to business and managerial communication, leading to a loss of opportunities and reduced competitiveness. 
The majority of global-based companies have already become culture-, communication-, technology- and information-dependent, and ignoring or neglecting this fact presents a significant risk, which may be one of the biggest threats to global competitiveness. Since the success of SMEs is critical for the development of the national economy, it is recommended that company communication be continuously enhanced by frequent training at all organizational levels. This presents a challenge for educational institutions and training centers, managers and businesspeople, of creating communication competencies that would be highly rewarded in the global business environment. Full article
(This article belongs to the Special Issue ICT Enhanced Social Sciences and Humanities)
Show Figures

Figure 1

12 pages, 534 KiB  
Article
Get out of Church! The Case of #EmptyThePews: Twitter Hashtag between Resistance and Community
by Ruth Tsuria
Information 2020, 11(6), 335; https://doi.org/10.3390/info11060335 - 23 Jun 2020
Cited by 4 | Viewed by 5395
Abstract
This study explores the relationship between politics and religion, resistance and community, on social media through the case study of #EmptyThePews. #EmptyThePews was created in August 2017 after the events in Charlottesville, calling on users who attend Trump-supporting churches to leave those churches as a form of protest. What started out as a call to action became a polysemic online signifier for sharing stories of religious abuse, and thus a format for identity and community construction. An analysis of 250 tweets with #EmptyThePews reveals five different uses of the hashtag, including highlighting racial, gender, and sexual identity-based discrimination; sharing stories of religious or sexual abuse; constructing a community and identity; and actively calling for people to empty churches. This Twitter hashtag did not facilitate an active movement of people leaving churches, but instead created a Twitter community. Giving voice and space to this community, however, can be seen as a form of resistance. Full article
(This article belongs to the Special Issue Cultural Studies of Digital Society)
Show Figures

Figure 1

24 pages, 794 KiB  
Concept Paper
Precursors of Role-Based Access Control Design in KMS: A Conceptual Framework
by Gabriel Nyame and Zhiguang Qin
Information 2020, 11(6), 334; https://doi.org/10.3390/info11060334 - 22 Jun 2020
Cited by 3 | Viewed by 4304
Abstract
Role-based access control (RBAC) continues to gain popularity in the management of authorization concerning access to knowledge assets in organizations. As a socio-technical concept, the notion of role in RBAC has been overemphasized, while very little attention is given to its precursors: role strain, role ambiguity, and role conflict. These constructs provide more significant insights into RBAC design in Knowledge Management Systems (KMS). A KMS is a technology-based knowledge management tool used to acquire, store, share, and apply knowledge for improved collaboration and knowledge-value creation. In this paper, we propose eight propositions that require future research concerning the RBAC system for knowledge security. In addition, we propose a model that integrates these precursors and RBAC to deepen the understanding of these constructs. Further, we examine these precursory constructs in a socio-technical fashion relative to RBAC in the organizational context and the status–role relationship effects. We carried out a conceptual analysis and synthesis of the relevant literature, and present a model that involves the three essential precursors that play crucial roles in role mining and engineering in RBAC design. Using an illustrative case study of two companies in which 63 IT professionals participated, we established that the precursors positively and significantly increase the intractability of RBAC system design. Our framework draws the attention of both the management of organizations and RBAC system developers to the need to consider and analyze the precursors thoroughly before initiating the processes of policy engineering, role mining, and role engineering. The propositions stated in this study are important considerations for future work. Full article
(This article belongs to the Section Information Systems)
Show Figures

Figure 1
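As a rough illustration of how role conflict can surface in RBAC design, the sketch below encodes a separation-of-duty constraint and detects role assignments that violate it. The role names, permissions, and constraint are invented for illustration and are not taken from the paper.

```python
# Hypothetical role/permission tables for a small KMS; a pair of mutually
# exclusive roles models "role conflict", one of the precursors discussed above.
role_permissions = {
    "author":   {"create_doc", "edit_own_doc"},
    "reviewer": {"read_doc", "comment_doc"},
    "approver": {"read_doc", "approve_doc"},
}
# Separation-of-duty constraint: nobody may both review and approve.
mutually_exclusive = [{"reviewer", "approver"}]

def permissions_of(user_roles):
    """Union of permissions granted by a user's roles."""
    perms = set()
    for role in user_roles:
        perms |= role_permissions[role]
    return perms

def role_conflicts(user_roles):
    """Return the exclusivity constraints this role assignment violates."""
    return [pair for pair in mutually_exclusive if pair <= set(user_roles)]

print(sorted(permissions_of({"author", "reviewer"})))
print(role_conflicts({"reviewer", "approver"}))   # violates separation of duty
```

Checking such constraints before role mining and role engineering begins is exactly the kind of precursor analysis the framework argues for.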

19 pages, 3107 KiB  
Article
Exploring Technology Influencers from Patent Data Using Association Rule Mining and Social Network Analysis
by Pranomkorn Ampornphan and Sutep Tongngam
Information 2020, 11(6), 333; https://doi.org/10.3390/info11060333 - 22 Jun 2020
Cited by 27 | Viewed by 4794
Abstract
A patent is an important document issued by the government to protect inventions or product designs. Inventions include mechanical structures, production processes, quality improvements of products, and so on. Generally, the goods and appliances of everyday life result from an invention or product design that has been published in patent documents. A new invention contributes to the standard of living, improves productivity and quality, reduces production costs for industry, or delivers products with higher added value. Patent documents are considered to be excellent sources of knowledge in a particular field of technology, leading to inventions. Technology trend forecasting from patent documents traditionally depends on the subjective experience of experts. However, accumulated patent documents comprise a huge amount of text data, making it difficult for experts to gain knowledge precisely and promptly; technology trend forecasting using objective methods is therefore more feasible. Many statistical methods are applied to patent analysis, for example, technology overviews, investment volume, and the technology life cycle, as well as data mining methods by which patent documents can be classified, such as by technical characteristics, to support business decision-making. The main contribution of this study is to apply data mining methods and social network analysis to gain knowledge of emerging technologies and find informative technology trends in patent data. We experimented with our techniques on data retrieved from the European Patent Office (EPO) website. The technique includes K-means clustering, text mining, and association rule mining methods. The patent data analyzed include the International Patent Classification (IPC) code and patent titles. Association rule mining was applied to find associative relationships among patent data, then combined with social network analysis (SNA) to further analyze technology trends. SNA provided metric measurements to explore the most influential technology as well as visualize data in various network layouts. The results showed emerging technology clusters, their meaningful patterns, and a network structure, and suggested information for the development of technologies and inventions. Full article
(This article belongs to the Special Issue Computer Modelling in Decision Making (CMDM 2019))
Show Figures

Figure 1
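The association rule mining step above can be illustrated with the standard support and confidence measures over itemsets of IPC codes. The five toy "patents" below are invented; the paper mines rules from real EPO records.

```python
# Hypothetical transactions: each patent is the set of IPC codes it carries.
patents = [
    {"G06F", "H04L"}, {"G06F", "H04L", "G06N"},
    {"G06N", "H04L"}, {"G06F", "G06N"}, {"G06F", "H04L"},
]

def support(itemset):
    """Fraction of patents containing every code in the itemset."""
    return sum(itemset <= p for p in patents) / len(patents)

def confidence(antecedent, consequent):
    """Of the patents matching the antecedent, the fraction also matching the consequent."""
    return support(antecedent | consequent) / support(antecedent)

# Rule "G06F => H04L": how often do G06F patents also carry H04L?
print(round(support({"G06F", "H04L"}), 2))       # 0.6
print(round(confidence({"G06F"}, {"H04L"}), 2))  # 0.75
```

Rules passing support and confidence thresholds become the edges that SNA then analyzes for influence.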

21 pages, 5303 KiB  
Article
Evaluation of Tree-Based Ensemble Machine Learning Models in Predicting Stock Price Direction of Movement
by Ernest Kwame Ampomah, Zhiguang Qin and Gabriel Nyame
Information 2020, 11(6), 332; https://doi.org/10.3390/info11060332 - 20 Jun 2020
Cited by 92 | Viewed by 9426
Abstract
Forecasting the direction and trend of stock prices is an important task that helps investors make prudent financial decisions in the stock market. Investment in the stock market carries a big risk, and minimizing prediction error reduces the investment risk. Machine learning (ML) models typically perform better than statistical and econometric models, and ensemble ML models have been shown in the literature to outperform single ML models. In this work, we compare the effectiveness of tree-based ensemble ML models (Random Forest (RF), XGBoost Classifier (XG), Bagging Classifier (BC), AdaBoost Classifier (Ada), Extra Trees Classifier (ET), and Voting Classifier (VC)) in forecasting the direction of stock price movement. Eight different stock datasets from three stock exchanges (NYSE, NASDAQ, and NSE) are randomly collected and used for the study. Each dataset is split into training and test sets. Ten-fold cross-validation accuracy is used to evaluate the ML models on the training set. In addition, the ML models are evaluated on the test set using accuracy, precision, recall, F1-score, specificity, and area under the receiver operating characteristic curve (AUC-ROC). The Kendall W test of concordance is used to rank the performance of the tree-based ML algorithms. On the training set, the AdaBoost model performed better than the rest of the models. On the test set, the accuracy, precision, F1-score, and AUC metrics produced results significant enough to rank the models, and the Extra Trees classifier outperformed the other models in all the rankings. Full article
(This article belongs to the Special Issue Machine Learning on Scientific Data and Information)
Show Figures

Figure 1
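The Voting Classifier named above can be sketched as hard majority voting over the up/down calls of several base classifiers. The toy predictions below are invented; the paper's models are trained on real exchange data.

```python
from collections import Counter

def majority_vote(predictions):
    """Hard-voting ensemble: each row holds one classifier's up(1)/down(0) calls."""
    votes_per_day = zip(*predictions)          # transpose: one tuple per day
    return [Counter(day).most_common(1)[0][0] for day in votes_per_day]

# Toy direction calls from three base classifiers over five trading days.
rf  = [1, 0, 1, 1, 0]
xgb = [1, 1, 1, 0, 0]
et  = [0, 0, 1, 1, 1]
print(majority_vote([rf, xgb, et]))   # [1, 0, 1, 1, 0]
```

With an odd number of binary voters there are no ties; soft voting would instead average predicted probabilities before thresholding.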

11 pages, 597 KiB  
Article
How Managers Use Information Systems for Strategy Implementation in Agritourism SMEs
by Maria Kamariotou and Fotis Kitsios
Information 2020, 11(6), 331; https://doi.org/10.3390/info11060331 - 20 Jun 2020
Cited by 5 | Viewed by 3074
Abstract
Agritourism is long established, and interest in the diversification of agricultural enterprises into tourism has increased. However, many challenges have emerged regarding the lack of appropriate skills, strategic planning, and Information Systems (IS), as well as increased costs in production processes. In Greece, the contribution of agritourism to economic growth has increased in the last decade, but the relationship between the agricultural sector and agritourism has meant that only certain people have largely taken advantage of the development of tourism in a specific area. Greek businesses operating in the agritourism sector are seeking long-term sustainability, yet they lack strategic planning and effective use of IS. As the strategic implementation of Information Technology (IT) in Small and Medium Enterprises (SMEs) in the agritourism sector is under-researched, this paper aims to investigate how the process of Strategic Information Systems Planning (SISP) affects the success of Greek SMEs in the agritourism industry. IS executives completed the survey, and ANOVA was used for data analysis. The findings of the paper indicate that IS executives do not focus on the analysis of the external environment or the evaluation of opportunities for IS development. In addition, the lack of a formulated IT strategy leads to inefficient and unsuccessful IT projects. Full article
(This article belongs to the Section Information Systems)
Show Figures

Figure 1

15 pages, 2058 KiB  
Article
Discovering Influential Positions in RFID-Based Indoor Tracking Data
by Ye Jin and Lizhen Cui
Information 2020, 11(6), 330; https://doi.org/10.3390/info11060330 - 20 Jun 2020
Cited by 3 | Viewed by 2507
Abstract
The rapid development of indoor localization techniques such as Wi-Fi and RFID makes it possible to obtain users’ position-tracking data in indoor space. Indoor position-tracking data, also known as indoor moving trajectories, offer many new opportunities to mine decision-making knowledge. In this paper, we study the detection of highly influential positions from indoor position-tracking data, e.g., to detect highly influential positions in a business center, or to detect the hottest shops in a shopping mall according to users’ indoor position-tracking data. We first describe three baseline solutions to this problem, which are count-based, density-based, and duration-based algorithms. Then, motivated by the H-index for evaluating the influence of an author or a journal in academia, we propose a new algorithm called H-Count, which evaluates the influence of an indoor position similarly to the H-index. We further present an improvement of H-Count, called H-Count*, which adds a filtering step to remove unqualified position-tracking records. This is based on the observation that many visits to a position such as a gate are meaningless for the detection of influential indoor positions. Finally, we simulate 100 moving objects in a real building deployed with 94 RFID readers over 30 days to generate 223,564 indoor moving trajectories, and conduct experiments to compare our proposed H-Count and H-Count* with the three baseline algorithms. The results show that H-Count outperforms all baselines and H-Count* further improves the F-measure of H-Count by 113% on average. Full article
(This article belongs to the Special Issue Indoor Navigation in Smart Cities)
Show Figures

Figure 1
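One plausible reading of the H-Count idea, by analogy with the H-index, is: a position has influence h if at least h visitors visited it at least h times each. A minimal sketch under that assumption (the visit counts below are invented):

```python
def h_count(visit_counts):
    """H-index-style influence: the largest h such that at least h visitors
    visited the position at least h times each."""
    counts = sorted(visit_counts, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Hypothetical per-visitor visit counts for one indoor position.
print(h_count([9, 7, 6, 2, 1]))   # 3: three visitors with >= 3 visits each
```

The H-Count* refinement would simply drop unqualified records (e.g., pass-through visits at a gate) from `visit_counts` before computing h.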

18 pages, 3948 KiB  
Article
Classroom Attendance Systems Based on Bluetooth Low Energy Indoor Positioning Technology for Smart Campus
by Apiruk Puckdeevongs, N. K. Tripathi, Apichon Witayangkurn and Poompat Saengudomlert
Information 2020, 11(6), 329; https://doi.org/10.3390/info11060329 - 19 Jun 2020
Cited by 19 | Viewed by 11260
Abstract
Student attendance during classroom hours is important, because it impacts the academic performance of students. Consequently, several universities impose a minimum attendance percentage criterion for students to be allowed to attend examinations; therefore, recording student attendance is a vital task. Conventional methods for recording student attendance in the classroom, such as roll-call and sign-in, are an inefficient use of instruction time and only increase teachers’ workloads. In this study, we propose a Bluetooth Low Energy-based student positioning framework for automatically recording student attendance in classrooms. The proposed architecture consists of two components, an indoor positioning framework within the classroom and student attendance registration. Experimental studies using our method show that the Received Signal Strength Indicator fingerprinting technique that is used in indoor scenarios can achieve satisfactory positioning accuracy, even in a classroom environment with typically high signal interference. We intentionally focused on designing a basic system with simple indoor devices based on ubiquitous Bluetooth technology and integrating an attendance system with computational techniques in order to minimize operational costs and complications. The proposed system is tested and demonstrated to be usable in a real classroom environment at Rangsit University, Thailand. Full article
(This article belongs to the Special Issue Indoor Navigation in Smart Cities)
Show Figures

Figure 1
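RSSI fingerprinting of the kind described above can be sketched as nearest-neighbour matching in signal space: an offline phase records a mean RSSI vector per position, and the online phase assigns a new observation to the closest stored fingerprint. The beacon values and seat labels below are invented for illustration.

```python
import math

# Hypothetical offline fingerprint database: seat label -> mean RSSI (dBm)
# observed from three BLE beacons during a calibration phase.
fingerprints = {
    "seat_A1": (-55, -70, -80),
    "seat_B3": (-70, -58, -75),
    "seat_C5": (-82, -72, -57),
}

def locate(observed):
    """Nearest-neighbour matching in signal space (Euclidean distance)."""
    return min(fingerprints,
               key=lambda seat: math.dist(fingerprints[seat], observed))

print(locate((-57, -71, -79)))   # closest to seat_A1's fingerprint
```

Attendance registration then reduces to checking whether a student's estimated position falls inside the classroom during the lecture window.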

13 pages, 3280 KiB  
Article
How Will Sense of Values and Preference Change during Art Appreciation?
by Akinori Abe, Kotaro Fukushima and Reina Kawada
Information 2020, 11(6), 328; https://doi.org/10.3390/info11060328 - 18 Jun 2020
Cited by 13 | Viewed by 3297
Abstract
We have conducted several experiments in which various types of information offering strategies were performed, and observed interesting phenomena in the results. The participants seemed to be able to gradually understand an artwork when offered information about it. For abstract art in particular, information about the artwork supports a better understanding of it, and even for a representational painting, the quality and quantity of understanding gradually changed. Thus, information about art sometimes influences art appreciation. In this paper, we discuss how the sense of values and preference regarding art change according to the offered information. In addition, we discuss which factor (information) changes viewers’ sense of values and preference during art appreciation. For that, we conducted two experiments: one in which information about the artwork was offered randomly (each person may obtain different information for the artwork), and one in which, for all the artworks, the information was offered in the same manner (all persons obtain the same information for the artworks). The information involved the title, painting materials, techniques, production year, name of the artist, price, background, and theme of the artworks. Full article
(This article belongs to the Special Issue CDEC: Cross-disciplinary Data Exchange and Collaboration)
Show Figures

Figure 1

17 pages, 11719 KiB  
Article
Analyzing Service Quality Evaluation Indexes of Rural Last Mile Delivery Using FCE and ISM Approach
by Xiaohong Jiang, Huiying Wang and Xiucheng Guo
Information 2020, 11(6), 327; https://doi.org/10.3390/info11060327 - 18 Jun 2020
Cited by 10 | Viewed by 5137
Abstract
The advent of e-commerce has led to a rapid acceleration of rural logistics development in China. To enhance the green and sustainable development of rural logistics, it is necessary to improve the service quality of rural last mile delivery and analyze its service quality evaluation indexes. An integrated methodology combining fuzzy comprehensive evaluation (FCE) and the interpretative structural model (ISM) is presented in the current paper to reveal the relationships between the service quality evaluation indexes of rural last mile delivery. A total of 18 logistics service quality evaluation indexes in five dimensions are selected. The FCE is used to measure the service quality of rural delivery in an empirical research area, and the weight of each evaluation index is assigned by regression analysis. The ISM is adopted to judge the hierarchical structure of the indexes, and a five-layer hierarchy is obtained. The results show that it is necessary to first focus on improving the evaluation indexes of accuracy of goods arrival and timely customer service response. In the case of Shunfeng Express, the company additionally needs to improve the timeliness and rationality of processing damaged or lost goods. Some countermeasures and suggestions are put forward. The proposed integrated method helps to reveal the key service quality evaluation indexes and the areas needing improvement, and the use of regression analysis within the FCE method allows the estimation of weights in a relatively objective way. This research provides theoretical support for improving the service quality of and customers’ satisfaction with rural last mile delivery, and for enhancing the green and sustainable development of rural logistics. Full article
(This article belongs to the Special Issue Green Marketing)
Show Figures

Figure 1
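The FCE step can be sketched as a weighted aggregation B = W · R of index-level membership degrees into grade-level scores. The indexes, grades, memberships, and weights below are invented; the paper derives its weights by regression analysis rather than fixing them by hand.

```python
# Fuzzy comprehensive evaluation sketch: rows are hypothetical service-quality
# indexes, columns are membership degrees over grades (good, fair, poor).
membership = {
    "arrival_accuracy": (0.6, 0.3, 0.1),
    "response_speed":   (0.4, 0.4, 0.2),
    "damage_handling":  (0.2, 0.5, 0.3),
}
weights = {"arrival_accuracy": 0.5, "response_speed": 0.3, "damage_handling": 0.2}

def evaluate(membership, weights):
    """Weighted aggregation B = W * R over the grade columns."""
    grades = len(next(iter(membership.values())))
    return [round(sum(weights[k] * membership[k][g] for k in membership), 2)
            for g in range(grades)]

print(evaluate(membership, weights))   # [0.46, 0.37, 0.17] -> "good" dominates
```

The grade with the largest aggregated membership gives the overall service-quality verdict, and the per-index rows show where improvement effort should go first.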

20 pages, 1379 KiB  
Article
AndroDFA: Android Malware Classification Based on Resource Consumption
by Luca Massarelli, Leonardo Aniello, Claudio Ciccotelli, Leonardo Querzoni, Daniele Ucci and Roberto Baldoni
Information 2020, 11(6), 326; https://doi.org/10.3390/info11060326 - 16 Jun 2020
Cited by 7 | Viewed by 3694
Abstract
The vast majority of today’s mobile malware targets Android devices. An important task of malware analysis is the classification of malicious samples into known families. In this paper, we propose AndroDFA (DFA, detrended fluctuation analysis): an approach to Android malware family classification based on dynamic analysis of resource consumption metrics available from the proc file system. These metrics can be easily measured during sample execution. From each malware, we extract features through detrended fluctuation analysis (DFA) and Pearson’s correlation, then a support vector machine is employed to classify malware into families. We provide an experimental evaluation based on malware samples from two datasets, namely Drebin and AMD. With the Drebin dataset, we obtained a classification accuracy of 82%, comparable with works from the state-of-the-art like DroidScribe. However, compared to DroidScribe, our approach is easier to reproduce because it is based on publicly available tools only, does not require any modification to the emulated environment or Android OS, and by design, can also be used on physical devices rather than exclusively on emulators. The latter is a key factor because modern mobile malware can detect the emulated environment and hide its malicious behavior. The experiments on the AMD dataset gave similar results, with an overall mean accuracy of 78%. Furthermore, we made the software we developed publicly available, to ease the reproducibility of our results. Full article
(This article belongs to the Special Issue New Frontiers in Android Malware Analysis and Detection)
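The feature-extraction step described above — summarizing a resource-consumption time series by its DFA scaling exponent — can be sketched in a few lines. The following is a minimal, self-contained estimator (the SVM classification step is omitted); the scale choices and the white-noise sanity check are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def dfa_exponent(x, scales=(8, 16, 32, 64, 128)):
    """Estimate the detrended fluctuation analysis (DFA) scaling exponent."""
    y = np.cumsum(x - np.mean(x))                  # integrated profile
    flucts = []
    for s in scales:
        n = len(y) // s
        segs = y[: n * s].reshape(n, s)
        t = np.arange(s)
        # linear detrending per window, then RMS fluctuation
        rms = [np.sqrt(np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2))
               for seg in segs]
        flucts.append(np.mean(rms))
    # slope of log F(s) vs. log s is the DFA exponent alpha
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(0)
alpha_noise = dfa_exponent(rng.standard_normal(8192))   # ~0.5 for white noise
```

One such exponent per monitored metric, plus pairwise Pearson correlations, would form the feature vector handed to the classifier.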
17 pages, 1390 KiB  
Article
Signal Timing Optimization Model Based on Bus Priority
by Xu Sun, Kun Lin, Pengpeng Jiao and Huapu Lu
Information 2020, 11(6), 325; https://doi.org/10.3390/info11060325 - 15 Jun 2020
Cited by 6 | Viewed by 2675
Abstract
This paper focuses on the optimization problem of a signal timing design based on the concept of bus priority. This optimization problem is formulated in the form of a bi-level programming model that minimizes average passenger delay at intersections and vehicle delay in lanes simultaneously. A solution framework that implements the differential evolution (DE) algorithm is developed to efficiently solve the model. A case study based on a real-world intersection in Beijing, China, is implemented to test the efficiency and applicability of the proposed modeling and computing methods. The experiment’s result shows that the optimization model can not only significantly improve the priority capacity of the buses at the intersection but also reduce the adverse impact of bus-priority approaches on the private vehicles for the intersections. Full article
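The bi-level model itself is not reproduced in the abstract, but the solution machinery — differential evolution searching over green-time splits — can be sketched against a toy delay surrogate. The delay function, cycle length, bounds, and weights below are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def delay(g):
    # Toy surrogate: weighted delay of two phases sharing a fixed 60 s cycle;
    # g[0] is the green time of the bus-priority phase. The 3-vs-1 weights
    # stand in for higher passenger occupancy on buses.
    g1 = g[0]
    g2 = 60.0 - g1
    return 3.0 * (60.0 - g1) ** 2 / 60.0 + 1.0 * (60.0 - g2) ** 2 / 60.0

def differential_evolution(f, lo, hi, pop=20, gens=150, F=0.8, CR=0.9, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, size=(pop, 1))
    fx = np.array([f(v) for v in x])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = x[rng.choice(pop, 3, replace=False)]
            if rng.random() < CR:                  # crossover probability
                trial = np.clip(a + F * (b - c), lo, hi)   # mutation
                ft = f(trial)
                if ft < fx[i]:                     # greedy selection
                    x[i], fx[i] = trial, ft
    best = int(np.argmin(fx))
    return x[best], fx[best]

best_g, best_delay = differential_evolution(delay, 10.0, 50.0)
# analytic optimum of this surrogate: g1 = 45 s, delay = 45
```

In the paper's bi-level setting, each candidate timing would be scored by the lower-level delay model rather than this closed-form surrogate.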
15 pages, 1670 KiB  
Article
Reliability Dynamic Analysis by Fault Trees and Binary Decision Diagrams
by Fausto Pedro García Márquez, Isaac Segovia Ramírez, Behnam Mohammadi-Ivatloo and Alberto Pliego Marugán
Information 2020, 11(6), 324; https://doi.org/10.3390/info11060324 - 15 Jun 2020
Cited by 47 | Viewed by 3827
Abstract
New wind turbines are becoming more complex, and their reliability analysis is rising in complexity accordingly. The systems are composed of many components. The fault tree is a useful tool for analyzing these interrelations and providing a scheme of the wind turbine, giving a quick overview of the system's behavior under certain component conditions. However, it is complicated, and in some cases not possible, to identify the conditions that would generate a wind turbine failure. A quantitative and qualitative reliability analysis of the wind turbine is proposed in this study. Binary decision diagrams are employed as a suitable and operational method to facilitate this analysis and to obtain an analytical expression from the Boolean functions. The size of the binary decision diagram, i.e., the computational cost of solving the problem, depends strongly on the ordering of the components or events considered. Different heuristic ranking methods are used to find an optimal order, or one close to it, and to validate the results: AND, level, top-down-left-right, depth-first search, and breadth-first search. Birnbaum and criticality importance measures are proposed to evaluate the relevance of each component. This analysis classifies the events according to their importance with respect to the probability of the top event, and provides the basis for medium- and long-term maintenance strategies. Full article
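The Birnbaum importance measure mentioned above has a compact definition: I_B(i) = P(TOP | component i failed) − P(TOP | component i working). A brute-force sketch on a toy three-component fault tree follows; the tree, component names, and probabilities are invented for illustration, and the paper uses binary decision diagrams precisely to avoid this kind of exhaustive enumeration on large trees:

```python
from itertools import product

# Toy wind-turbine fault tree: TOP = (gearbox AND bearing) OR generator
names = ["gearbox", "bearing", "generator"]
p = {"gearbox": 0.10, "bearing": 0.20, "generator": 0.05}   # failure probs

def top_event(s):
    return (s["gearbox"] and s["bearing"]) or s["generator"]

def top_probability(fixed=None):
    """P(TOP), optionally with some component states fixed to 0 or 1."""
    fixed = dict(fixed or {})
    free = [n for n in names if n not in fixed]
    prob = 0.0
    for bits in product([0, 1], repeat=len(free)):
        s = dict(fixed, **dict(zip(free, bits)))
        w = 1.0
        for n in free:
            w *= p[n] if s[n] else 1.0 - p[n]
        if top_event(s):
            prob += w
    return prob

# Birnbaum importance: I_B(i) = P(TOP | x_i = 1) - P(TOP | x_i = 0)
birnbaum = {n: top_probability({n: 1}) - top_probability({n: 0}) for n in names}
# here the single-point-of-failure generator dominates: I_B = 0.98
```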
14 pages, 4787 KiB  
Article
GAP: Geometric Aggregation of Popularity Metrics
by Christos Koutlis, Manos Schinas, Symeon Papadopoulos and Ioannis Kompatsiaris
Information 2020, 11(6), 323; https://doi.org/10.3390/info11060323 - 15 Jun 2020
Cited by 1 | Viewed by 3095
Abstract
Estimating and analyzing the popularity of an entity is an important task for professionals in several areas, e.g., music, social media, and cinema. Furthermore, the ample availability of online data should enhance our insights into the collective consumer behavior. However, effectively modeling popularity and integrating diverse data sources are very challenging problems with no consensus on the optimal approach to tackle them. To this end, we propose a non-linear method for popularity metric aggregation based on geometrical shapes derived from the individual metrics’ values, termed Geometric Aggregation of Popularity metrics (GAP). In this work, we particularly focus on the estimation of artist popularity by aggregating web-based artist popularity metrics. Even though the most natural choice for metric aggregation would be a linear model, our approach leads to stronger rank-correlation and non-linear-correlation scores than linear aggregation schemes. More precisely, our approach outperforms the simple average method in five out of seven evaluation measures. Full article
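GAP's exact geometric construction is beyond the abstract, but the contrast with linear averaging can be illustrated with one plausible geometric scheme: the area of the polygon that normalized metrics span on equally spaced radar-chart axes. This scheme and the sample values are assumptions for illustration, not necessarily GAP itself:

```python
import math

def radar_area(metrics):
    """Non-linear aggregation: area of the polygon the normalized metric
    values span on equally spaced radar-chart axes (illustrative only)."""
    k = len(metrics)
    ang = 2.0 * math.pi / k
    return 0.5 * math.sin(ang) * sum(
        metrics[i] * metrics[(i + 1) % k] for i in range(k)
    )

def linear_mean(metrics):
    return sum(metrics) / len(metrics)

skewed = [0.9, 0.8, 0.1]      # artist strong on two metrics, weak on one
balanced = [0.6, 0.6, 0.6]    # artist moderately strong everywhere
# the linear mean cannot tell these two apart; the polygon area can,
# because it multiplies neighboring metrics instead of summing them
```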
15 pages, 1086 KiB  
Article
Mutual Information Loss in Pyramidal Image Processing
by Jerry Gibson and Hoontaek Oh
Information 2020, 11(6), 322; https://doi.org/10.3390/info11060322 - 15 Jun 2020
Cited by 5 | Viewed by 2655
Abstract
Gaussian and Laplacian pyramids have long been important for image analysis and compression. More recently, multiresolution pyramids have become an important component of machine learning and deep learning for image analysis and image recognition. Constructing Gaussian and Laplacian pyramids consists of a series of filtering, decimation, and differencing operations, and the quality indicator is usually mean squared reconstruction error in comparison to the original image. We present a new characterization of the information loss in a Gaussian pyramid in terms of the change in mutual information. More specifically, we show that one half the log ratio of entropy powers between two stages in a Gaussian pyramid is equal to the difference in mutual information between these two stages. We show that this relationship holds for a wide variety of probability distributions and present several examples of analyzing Gaussian and Laplacian pyramids for different images. Full article
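The entropy-power identity quoted above can be written out. With differential entropy h(·), the entropy power of a continuous random variable is Q(X) = e^{2h(X)}/(2πe), so half the log ratio of entropy powers between two pyramid stages reduces to an entropy difference, which the paper's result identifies with the change in mutual information with respect to the original image X (notation here is assumed, following standard information-theoretic conventions):

```latex
Q(X) = \frac{1}{2\pi e}\, e^{2h(X)}
\qquad\Longrightarrow\qquad
\frac{1}{2}\log\frac{Q(G_k)}{Q(G_{k+1})}
  = h(G_k) - h(G_{k+1})
% which, per the paper's result, equals the mutual information loss
%   I(X; G_k) - I(X; G_{k+1})
% between successive Gaussian pyramid stages G_k and G_{k+1}.
```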
20 pages, 2186 KiB  
Benchmark
A Controlled Benchmark of Video Violence Detection Techniques
by Nicola Convertini, Vincenzo Dentamaro, Donato Impedovo, Giuseppe Pirlo and Lucia Sarcinella
Information 2020, 11(6), 321; https://doi.org/10.3390/info11060321 - 13 Jun 2020
Cited by 8 | Viewed by 3282
Abstract
This benchmarking study aims to examine and discuss the current state-of-the-art techniques for in-video violence detection, and also provide benchmarking results as a reference for the future accuracy baseline of violence detection systems. In this paper, the authors review 11 techniques for in-video violence detection. They re-implement five carefully chosen state-of-the-art techniques over three different and publicly available violence datasets, using several classifiers, all in the same conditions. The main contribution of this work is to compare feature-based violence detection techniques and modern deep-learning techniques, such as Inception V3. Full article
14 pages, 535 KiB  
Article
Modeling Word Learning and Processing with Recurrent Neural Networks
by Claudia Marzi
Information 2020, 11(6), 320; https://doi.org/10.3390/info11060320 - 13 Jun 2020
Cited by 1 | Viewed by 2878
Abstract
The paper focuses on what two different types of Recurrent Neural Networks, namely a recurrent Long Short-Term Memory and a recurrent variant of self-organizing memories, a Temporal Self-Organizing Map, can tell us about speakers’ learning and processing of a set of fully inflected verb forms selected from the top-frequency paradigms of Italian and German. Both architectures, due to the re-entrant layer of temporal connectivity, can develop a strong sensitivity to sequential patterns that are highly attested in the training data. The main goal is to evaluate learning and processing dynamics of verb inflection data in the two neural networks by focusing on the effects of morphological structure on word production and word recognition, as well as on word generalization for untrained verb forms. For both models, results show that production and recognition, as well as generalization, are facilitated for verb forms in regular paradigms. However, the two models are differently influenced by structural effects, with the Temporal Self-Organizing Map more prone to adaptively finding a balance between processing issues of learnability and generalization, on the one hand, and discriminability on the other. Full article
(This article belongs to the Special Issue Advances in Computational Linguistics)
26 pages, 423 KiB  
Article
A Reliable Weighting Scheme for the Aggregation of Crowd Intelligence to Detect Fake News
by Franklin Tchakounté, Ahmadou Faissal, Marcellin Atemkeng and Achille Ntyam
Information 2020, 11(6), 319; https://doi.org/10.3390/info11060319 - 12 Jun 2020
Cited by 14 | Viewed by 5277
Abstract
Social networks play an important role in today’s society and in our relationships with others. They give the Internet user the opportunity to play an active role, e.g., one can relay certain information via a blog, a comment, or even a vote. The Internet user has the possibility to share any content at any time. However, some malicious Internet users take advantage of this freedom to share fake news to manipulate or mislead an audience, to invade the privacy of others, and also to harm certain institutions. Fake news seeks to resemble traditional media in order to establish its credibility with the public, and its apparent seriousness pushes the public to share it. As a result, fake news can spread quickly and cause enormous difficulties for users and institutions. Several authors have proposed systems to detect fake news in social networks using crowd signals through the process of crowdsourcing. Unfortunately, these authors do not combine the expertise of the crowd with the expertise of a third party to make decisions. Crowds are useful for indicating whether or not a story should be fact-checked. This work proposes a new method of binary aggregation of the opinions of the crowd and the knowledge of a third-party expert. The aggregator is based on majority voting on the crowd side and weighted averaging on the third-party side. An experiment was conducted with 25 posts and 50 voters. A quantitative comparison with the majority-vote model reveals that our aggregation model provides slightly better results, owing to the weights assigned to accredited users. A qualitative investigation against existing aggregation models shows that the proposed approach meets the requirements or properties expected of a crowdsourcing system and a voting system. Full article
(This article belongs to the Special Issue Tackling Misinformation Online)
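The two-sided aggregation described above — majority voting on the crowd side, weighted averaging on the third-party side — can be sketched as follows. The final combination rule (agreement yields a verdict, disagreement defers to fact-checking), the score scale, and all names are illustrative assumptions, not the paper's exact formulation:

```python
def aggregate_verdict(crowd_flags, expert_scores, expert_weights, threshold=0.5):
    """Combine binary crowd flags (1 = 'looks fake') with weighted expert
    fakeness scores in [0, 1]; weights reflect expert accreditation."""
    # crowd side: simple majority vote
    crowd_fake = sum(crowd_flags) > len(crowd_flags) / 2.0
    # third-party side: weighted average of expert scores
    expert_avg = (sum(w * s for w, s in zip(expert_weights, expert_scores))
                  / sum(expert_weights))
    expert_fake = expert_avg > threshold
    if crowd_fake and expert_fake:
        return "fake"
    if not crowd_fake and not expert_fake:
        return "legitimate"
    return "needs fact-checking"     # sides disagree: escalate
```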
20 pages, 4915 KiB  
Article
HMIC: Hierarchical Medical Image Classification, A Deep Learning Approach
by Kamran Kowsari, Rasoul Sali, Lubaina Ehsan, William Adorno, Asad Ali, Sean Moore, Beatrice Amadi, Paul Kelly, Sana Syed and Donald Brown
Information 2020, 11(6), 318; https://doi.org/10.3390/info11060318 - 12 Jun 2020
Cited by 36 | Viewed by 7059
Abstract
Image classification is central to the big data revolution in medicine. Improved information processing methods for the diagnosis and classification of digital medical images have been shown to be successful via deep learning approaches. As this field is explored, there are limitations to the performance of traditional supervised classifiers. This paper outlines an approach that differs from current medical image classification methods, which view the issue as flat multi-class classification. We performed hierarchical classification using our Hierarchical Medical Image Classification (HMIC) approach. HMIC uses stacks of deep learning models to provide specialized comprehension at each level of the medical image hierarchy. To test performance, we use small bowel biopsy images comprising three categories at the parent level (Celiac Disease, Environmental Enteropathy, and histologically normal controls). At the child level, Celiac Disease severity is classified into four classes (I, IIIa, IIIb, and IIIc). Full article
(This article belongs to the Section Information Processes)
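The parent/child routing at the heart of a hierarchical classifier can be sketched independently of the deep models. The lambdas below are toy stand-ins for the trained networks, with hypothetical thresholds; only the routing structure reflects the HMIC idea:

```python
def hierarchical_predict(sample, parent_model, child_models):
    """Route a sample through a parent-level classifier, then through a
    child-level classifier specialised for the predicted parent class."""
    parent = parent_model(sample)
    child = child_models[parent](sample) if parent in child_models else None
    return parent, child

# toy stand-ins for the trained deep models (hypothetical thresholds)
parent_model = lambda s: "celiac" if sum(s) > 1.0 else "normal"
child_models = {
    "celiac": lambda s: "severity IIIa" if sum(s) > 2.0 else "severity I",
}
```

Classes without a child model (here, "normal") simply stop at the parent level.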
21 pages, 362 KiB  
Review
Recommender Systems Based on Collaborative Filtering Using Review Texts—A Survey
by Mehdi Srifi, Ahmed Oussous, Ayoub Ait Lahcen and Salma Mouline
Information 2020, 11(6), 317; https://doi.org/10.3390/info11060317 - 12 Jun 2020
Cited by 69 | Viewed by 9908
Abstract
In e-commerce websites and related micro-blogs, users supply online reviews expressing their preferences regarding various items. Such reviews typically take the form of textual comments and constitute a valuable source of information about user interests. Recently, several works have used review texts and their related rich information, such as review words, review topics, and review sentiments, to improve rating-based collaborative filtering recommender systems. These works vary from one another in how they exploit the review texts for deriving user interests. This paper provides a detailed survey of recent works that integrate review texts and also discusses how these review texts are exploited to address some main issues of standard collaborative filtering algorithms. Full article
(This article belongs to the Section Review)
20 pages, 11672 KiB  
Article
Gamified Evaluation in STEAM for Higher Education: A Case Study
by Pavel Boytchev and Svetla Boytcheva
Information 2020, 11(6), 316; https://doi.org/10.3390/info11060316 - 11 Jun 2020
Cited by 13 | Viewed by 4081
Abstract
The process of converting non-game educational content and processes into game-like educational content and processes is called gamification. This article describes gamified evaluation software for university students in Science, Technology, Engineering, the Arts and Mathematics (STEAM) courses, based on competence profiles of students and problems. Traditional learning management systems and learning tools cannot handle gamification to its full potential because of the unique requirements of gamified environments. We designed a novel gamification evaluation and assessment methodology implemented in a STEAM course through specially designed software. The results from end-user tests show a positive expectation of students’ performance and motivation. The preliminary results of over 100 students in the Fundamentals of Computer Graphics course are presented, and the results of quantitative analysis are discussed. In addition, we present an analysis of student surveys in which students expressed free-text observations about the software. Full article
(This article belongs to the Special Issue Cloud Gamification 2019)
13 pages, 1115 KiB  
Article
Ensemble-Based Online Machine Learning Algorithms for Network Intrusion Detection Systems Using Streaming Data
by Nathan Martindale, Muhammad Ismail and Douglas A. Talbert
Information 2020, 11(6), 315; https://doi.org/10.3390/info11060315 - 11 Jun 2020
Cited by 23 | Viewed by 4825
Abstract
As new cyberattacks are launched against systems and networks on a daily basis, the ability for network intrusion detection systems to operate efficiently in the big data era has become critically important, particularly as more low-power Internet-of-Things (IoT) devices enter the market. This has motivated research in applying machine learning algorithms that can operate on streams of data, trained online or “live” on only a small amount of data kept in memory at a time, as opposed to the more classical approaches that are trained solely offline on all of the data at once. In this context, one important concept from machine learning for improving detection performance is the idea of “ensembles”, where a collection of machine learning algorithms are combined to compensate for their individual limitations and produce an overall superior algorithm. Unfortunately, existing research lacks proper performance comparison between homogeneous and heterogeneous online ensembles. Hence, this paper investigates several homogeneous and heterogeneous ensembles, proposes three novel online heterogeneous ensembles for intrusion detection, and compares their performance accuracy, run-time complexity, and response to concept drift. Out of the proposed novel online ensembles, the heterogeneous ensemble consisting of an adaptive random forest of Hoeffding Trees combined with a Hoeffding Adaptive Tree performed best, handling concept drift in the most effective way. While this scheme is less accurate than a larger adaptive random forest, it offered a marginally better run-time, which is beneficial for online training. Full article
(This article belongs to the Special Issue Machine Learning for Cyber-Physical Security)
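The stream-learning setting described above — ensemble members trained one sample at a time and combined by voting — can be sketched with toy members. The `learn_one`/`predict_one` interface mirrors common stream-learning libraries; the members themselves are deliberately trivial illustrations, not Hoeffding trees:

```python
from collections import Counter

class CountingLearner:
    """Toy online member: predicts the majority label seen so far."""
    def __init__(self):
        self.counts = Counter()
    def learn_one(self, x, y):
        self.counts[y] += 1
    def predict_one(self, x):
        return self.counts.most_common(1)[0][0] if self.counts else 0

class ThresholdLearner:
    """Toy online member: thresholds a single feature at its running mean."""
    def __init__(self):
        self.total, self.n = 0.0, 0
    def learn_one(self, x, y):
        self.total += x
        self.n += 1
    def predict_one(self, x):
        return int(self.n > 0 and x > self.total / self.n)

class OnlineEnsemble:
    """Heterogeneous online ensemble combined by majority vote."""
    def __init__(self, members):
        self.members = members
    def learn_one(self, x, y):
        for m in self.members:
            m.learn_one(x, y)
    def predict_one(self, x):
        votes = [m.predict_one(x) for m in self.members]
        return max(set(votes), key=votes.count)

ens = OnlineEnsemble([CountingLearner(), ThresholdLearner(), ThresholdLearner()])
for x, y in [(0.1, 0), (0.2, 0), (0.9, 1), (0.8, 1), (0.15, 0)]:
    ens.learn_one(x, y)      # one sample at a time, as in stream learning
```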
22 pages, 3544 KiB  
Article
COVID-19 Public Sentiment Insights and Machine Learning for Tweets Classification
by Jim Samuel, G. G. Md. Nawaz Ali, Md. Mokhlesur Rahman, Ek Esawi and Yana Samuel
Information 2020, 11(6), 314; https://doi.org/10.3390/info11060314 - 11 Jun 2020
Cited by 289 | Viewed by 23684
Abstract
Along with the Coronavirus pandemic, another crisis has manifested itself in the form of mass fear and panic phenomena, fueled by incomplete and often inaccurate information. There is therefore a tremendous need to address and better understand COVID-19’s informational crisis and gauge public sentiment, so that appropriate messaging and policy decisions can be implemented. In this research article, we identify public sentiment associated with the pandemic using Coronavirus specific Tweets and R statistical software, along with its sentiment analysis packages. We demonstrate insights into the progress of fear-sentiment over time as COVID-19 approached peak levels in the United States, using descriptive textual analytics supported by necessary textual data visualizations. Furthermore, we provide a methodological overview of two essential machine learning (ML) classification methods, in the context of textual analytics, and compare their effectiveness in classifying Coronavirus Tweets of varying lengths. We observe a strong classification accuracy of 91% for short Tweets, with the Naïve Bayes method. We also observe that the logistic regression classification method provides a reasonable accuracy of 74% with shorter Tweets, and both methods showed relatively weaker performance for longer Tweets. This research provides insights into Coronavirus fear sentiment progression, and outlines associated methods, implications, limitations and opportunities. Full article
(This article belongs to the Section Information Applications)
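The better-performing of the two classifiers compared above, Naïve Bayes, can be sketched from scratch in a few lines. The paper works in R on real Coronavirus tweets; this is a minimal multinomial Naïve Bayes in Python with an invented toy corpus, so labels, documents, and the add-one smoothing setup are illustrative only:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs, labels):
    """Minimal multinomial Naive Bayes with add-one (Laplace) smoothing."""
    vocab = {w for d in docs for w in d.split()}
    counts = defaultdict(Counter)      # per-class word counts
    prior = Counter(labels)            # per-class document counts
    for d, y in zip(docs, labels):
        counts[y].update(d.split())

    def predict(doc):
        best, best_lp = None, -math.inf
        for y in prior:
            lp = math.log(prior[y] / len(labels))          # log prior
            total = sum(counts[y].values())
            for w in doc.split():                          # log likelihoods
                lp += math.log((counts[y][w] + 1) / (total + len(vocab)))
            if lp > best_lp:
                best, best_lp = y, lp
        return best
    return predict

predict = train_nb(
    ["stay home stay safe", "wash hands stay safe",
     "fear panic fear", "panic buying fear"],
    ["calm", "calm", "fear", "fear"],
)
```

The abstract's observation that short tweets classify more accurately is consistent with this model's independence assumption degrading on longer, topically mixed texts.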
24 pages, 4064 KiB  
Review
Applications of Nonlinear Programming to the Optimization of Fractionated Protocols in Cancer Radiotherapy
by Alessandro Bertuzzi, Federica Conte, Federico Papa and Carmela Sinisgalli
Information 2020, 11(6), 313; https://doi.org/10.3390/info11060313 - 10 Jun 2020
Cited by 3 | Viewed by 4408
Abstract
The present work of review collects and evidences the main results of our previous papers on the optimization of fractionated radiotherapy protocols. The problem under investigation is presented here in a unitary framework as a nonlinear programming application that aims to determine the optimal schemes of dose fractionation commonly used in external beam radiotherapy. The radiation responses of tumor and normal tissues are described by means of the linear quadratic model. We formulate a nonlinear, non-convex optimization problem including two quadratic constraints to limit the collateral normal tissue damages and linear box constraints on the fractional dose sizes. The general problem is decomposed into two subproblems: (1) analytical determination of the optimal fraction dose sizes as a function of the model parameters for arbitrarily fixed treatment lengths; and (2) numerical determination of the optimal fraction number, and of the optimal treatment time, in different parameter settings. After establishing the boundedness of the optimal number of fractions, we investigate by numerical simulation the optimal solution behavior for experimentally meaningful parameter ranges, recognizing the crucial role of some parameters, such as the radiosensitivity ratio, in determining the optimality of hypo- or equi-fractionated treatments. Our results agree with findings of the theoretical and clinical literature. Full article
(This article belongs to the Special Issue New Frontiers for Optimal Control Applications)
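The linear-quadratic model mentioned above gives rise to the standard biologically effective dose (BED), a convenient scalar for comparing fractionation schemes. BED is a textbook LQ-derived quantity used here only to illustrate the hypo- vs. equi-fractionation trade-off the paper optimizes; it is not the paper's full non-convex objective, and the parameter values are illustrative:

```python
def bed(n, d, alpha_beta):
    """Biologically effective dose (Gy) of n fractions of size d (Gy)
    under the linear-quadratic model: BED = n * d * (1 + d / (alpha/beta))."""
    return n * d * (1 + d / alpha_beta)

# comparing two schemes for a tissue with alpha/beta = 10 Gy
conventional = bed(30, 2.0, 10.0)       # 30 fractions of 2 Gy -> 72.0
hypofractionated = bed(5, 7.0, 10.0)    # 5 fractions of 7 Gy  -> 59.5
```

The radiosensitivity ratio alpha/beta plays the same pivotal role here as in the paper: it determines how strongly large fraction sizes are penalized or favored.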
19 pages, 600 KiB  
Article
Malicious Text Identification: Deep Learning from Public Comments and Emails
by Asma Baccouche, Sadaf Ahmed, Daniel Sierra-Sosa and Adel Elmaghraby
Information 2020, 11(6), 312; https://doi.org/10.3390/info11060312 - 10 Jun 2020
Cited by 27 | Viewed by 6786
Abstract
Identifying internet spam has been a challenging problem for decades. Several solutions have succeeded in detecting spam comments in social media or fraudulent emails. However, an adequate strategy for filtering messages is difficult to achieve, as these messages resemble real communications. From the Natural Language Processing (NLP) perspective, Deep Learning models are a good alternative for classifying text after preprocessing. In particular, Long Short-Term Memory (LSTM) networks are among the models that perform well on binary and multi-label text classification problems. In this paper, an approach merging two different data sources, one covering spam in social media posts and the other fraud classification in emails, is presented. We designed a multi-label LSTM model and trained it on the joined datasets, including text with common bigrams extracted from each independent dataset. The experimental results show that our proposed model is capable of identifying malicious text regardless of the source. The LSTM model trained on the merged dataset outperforms the models trained independently on each dataset. Full article
(This article belongs to the Special Issue Tackling Misinformation Online)
18 pages, 1562 KiB  
Article
The Importance of Trust in Knowledge Sharing and the Efficiency of Doing Business on the Example of Tourism
by Elżbieta Kacperska and Katarzyna Łukasiewicz
Information 2020, 11(6), 311; https://doi.org/10.3390/info11060311 - 10 Jun 2020
Cited by 8 | Viewed by 5768
Abstract
The ability to share knowledge in an organization may determine its success. Knowledge is one of the basic resources of an enterprise and the basis for undertaking various types of strategic action. Knowledge management in an organization should focus on processing all available information so as to create value as defined by the organization's employees and by its customers. Raising the issue of knowledge sharing requires mentioning trust: trust is a factor that conditions an effective atmosphere and cooperation in an organization. The main purpose of the article is to present the relationship between trust and knowledge sharing, taking into account the importance of this issue for the efficiency of doing business. To formulate conclusions, data from surveys carried out in 148 different tourist facilities were used. Data were collected by applying the diagnostic survey method, using a survey technique based on a prepared questionnaire. The results showed that trust is important in sharing knowledge and plays an important role in achieving a high level of performance efficiency. The study consists of an introduction, literature review, research results, and discussion of the results. At the end of the article, conclusions, limitations, and recommendations for future research are presented. Full article
11 pages, 3210 KiB  
Article
Position Control of Cable-Driven Robotic Soft Arm Based on Deep Reinforcement Learning
by Qiuxuan Wu, Yueqin Gu, Yancheng Li, Botao Zhang, Sergey A. Chepinskiy, Jian Wang, Anton A. Zhilenkov, Aleksandr Y. Krasnov and Sergei Chernyi
Information 2020, 11(6), 310; https://doi.org/10.3390/info11060310 - 8 Jun 2020
Cited by 22 | Viewed by 4713
Abstract
The cable-driven soft arm is made mostly of soft material and is difficult to control because of the material's characteristics, so traditional robot-arm modeling and control methods cannot be applied directly to it. In this paper, we combine data-driven modeling with reinforcement learning to realize position control of the robotic soft arm, using a control strategy based on deep Q-learning. To address the slow convergence and instability that arise during simulation-to-reality transfer when deep reinforcement learning is applied to a real robot control task, we design a control-strategy learning method that builds a simulation environment from experimental data, trains the control strategy there, and then applies it in the real environment. Finally, experiments show that the method can effectively control the soft robot arm, with better robustness than the traditional method. Full article