Information, Volume 10, Issue 7 (July 2019) – 24 articles

Cover Story: Data locality is key to improving the performance of big data processing. Apache Hadoop, which handles big data by sending analysis code to the data nodes to reduce data movement on the network, is a fundamental big data processing technology. The four Hadoop modules, namely Hadoop Common, the Hadoop Distributed File System (HDFS), Yet Another Resource Negotiator (YARN), and MapReduce, have been investigated to improve the performance of data processing on Hadoop. Among them, HDFS showed the greatest increase in data locality. In particular, deep data locality maximizes data locality on HDFS by pre-assigning and pre-allocating data blocks. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
18 pages, 1334 KiB  
Article
Task Assignment Algorithm Based on Trust in Volunteer Computing Platforms
by Ling Xu, Jianzhong Qiao, Shukuan Lin and Ruihua Qi
Information 2019, 10(7), 244; https://doi.org/10.3390/info10070244 - 23 Jul 2019
Cited by 3 | Viewed by 3225
Abstract
In volunteer computing (VC), the expected availability time and the actual availability time provided by volunteer nodes (VNs) are usually inconsistent. Scheduling tasks with precedence constraints in VC under this situation is a new challenge. In this paper, we propose two novel task assignment algorithms to minimize completion time (makespan) through flexible task assignment. Firstly, this paper proposes a reliability model, which uses a simple fuzzy model to predict the time interval provided by a VN. This reliability model can reduce inconsistencies between the expected and actual availability times. Secondly, based on the reliability model, this paper proposes an algorithm called EFTT (Earliest Finish Task based on Trust), which can minimize makespan. However, EFTT may induce resource waste in task assignment. To make full use of computing resources and reduce the task segmentation rate, an improved algorithm, IEFTT (Improved Earliest Finish Task based on Trust), is further proposed. Finally, experimental results verify the efficiency of the proposed algorithms. Full article
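To make the assignment rule concrete, here is a minimal Python sketch of a trust-weighted earliest-finish-time heuristic in the spirit of EFTT; the node model, field names, and the trust-inflated runtime rule are illustrative assumptions, not the paper's exact algorithm.

    def assign_tasks(tasks, nodes):
        """Greedily map each task to the node with the earliest trusted finish time.

        tasks: list of (task_id, workload) pairs, already in precedence order.
        nodes: dict node_id -> {"speed": float, "trust": float in (0, 1],
                                "free_at": float}.
        """
        schedule = {}
        for task_id, workload in tasks:
            best_node, best_finish = None, float("inf")
            for node_id, n in nodes.items():
                # A less trusted node is treated as effectively slower, since it
                # may become unavailable earlier than promised.
                finish = n["free_at"] + workload / (n["speed"] * n["trust"])
                if finish < best_finish:
                    best_node, best_finish = node_id, finish
            nodes[best_node]["free_at"] = best_finish
            schedule[task_id] = (best_node, best_finish)
        return schedule

    tasks = [("t1", 10.0), ("t2", 6.0), ("t3", 8.0)]
    nodes = {"vn1": {"speed": 2.0, "trust": 0.9, "free_at": 0.0},
             "vn2": {"speed": 3.0, "trust": 0.5, "free_at": 0.0}}
    print(assign_tasks(tasks, nodes))  # makespan = latest finish time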
15 pages, 1709 KiB  
Article
A Review Structure Based Ensemble Model for Deceptive Review Spam
by Zhi-Yuan Zeng, Jyun-Jie Lin, Mu-Sheng Chen, Meng-Hui Chen, Yan-Qi Lan and Jun-Lin Liu
Information 2019, 10(7), 243; https://doi.org/10.3390/info10070243 - 17 Jul 2019
Cited by 25 | Viewed by 3950
Abstract
Consumers’ purchase behavior increasingly relies on online reviews. Accordingly, deceptive reviews, which are harmful to customers, are becoming more and more common. Existing methods to detect spam reviews mainly treat the problem as a general text classification task, ignoring important features of spam reviews. In this paper, we propose a novel model, which splits a review into three parts: first sentence, middle context, and last sentence, based on the observation that the first and last sentences express stronger emotion than the middle context. The model then uses four independent bidirectional long short-term memory (LSTM) models to encode the beginning, middle, and end of a review, as well as the whole review, into four document representations. After that, the four representations are integrated into one document representation by a self-attention mechanism layer and an attention mechanism layer. Based on three domain datasets, the results of in-domain and mixed-domain experiments show that our proposed method performs better than the compared methods. Full article
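As a minimal illustration of the splitting step, the sketch below divides a review into its first sentence, middle context, and last sentence; the regex-based sentence splitter is a simplifying assumption.

    import re

    def split_review(review):
        """Return (first sentence, middle context, last sentence)."""
        sentences = [s for s in re.split(r"(?<=[.!?])\s+", review.strip()) if s]
        if not sentences:
            return "", "", ""
        if len(sentences) < 3:
            return sentences[0], "", sentences[-1]
        return sentences[0], " ".join(sentences[1:-1]), sentences[-1]

    first, middle, last = split_review(
        "Amazing hotel! The room was average and the lobby was plain. "
        "I would absolutely stay again!")
    print(first, "|", last)  # the parts that tend to carry the strongest emotion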
17 pages, 482 KiB  
Article
A Web Platform for Integrated Vulnerability Assessment and Cyber Risk Management
by Pietro Russo, Alberto Caponi, Marco Leuti and Giuseppe Bianchi
Information 2019, 10(7), 242; https://doi.org/10.3390/info10070242 - 17 Jul 2019
Cited by 12 | Viewed by 5210
Abstract
Cyber risk management is a very important problem for every company connected to the Internet. Usually, risk management considers only Risk Analysis, without connecting it to Vulnerability Assessment, and relies on external and expensive tools. In this paper we present CYber Risk Vulnerability Management (CYRVM), a custom-made software platform devised to simplify and improve automation and continuity in cyber security assessment. CYRVM’s main novelties are the combination, in a single and easy-to-use Web-based software platform, of an online Vulnerability Assessment tool with a Risk Analysis framework following the NIST 800-30 Risk Management guidelines, and the integration of predictive solutions able to suggest the risk rating and classification to the user. Full article
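As a rough illustration of qualitative risk rating in the spirit of the NIST 800-30 guidance the platform follows, the sketch below combines likelihood and impact levels into a risk level; the scales and thresholds are invented for illustration and are not CYRVM's actual model.

    LEVELS = ["very_low", "low", "moderate", "high", "very_high"]

    def risk_rating(likelihood, impact):
        """Combine qualitative likelihood and impact into a qualitative risk level."""
        score = LEVELS.index(likelihood) * LEVELS.index(impact)
        if score >= 12:
            return "very_high"
        if score >= 6:
            return "high"
        if score >= 3:
            return "moderate"
        return "low" if score >= 1 else "very_low"

    print(risk_rating("high", "very_high"))  # -> very_high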
22 pages, 2533 KiB  
Article
When Relational-Based Applications Go to NoSQL Databases: A Survey
by Geomar A. Schreiner, Denio Duarte and Ronaldo dos Santos Mello
Information 2019, 10(7), 241; https://doi.org/10.3390/info10070241 - 16 Jul 2019
Cited by 7 | Viewed by 6398
Abstract
Several data-centric applications today produce and manipulate large volumes of data, the so-called Big Data. Traditional databases, in particular relational databases, are not suitable for Big Data management. As a consequence, some approaches that allow the definition and manipulation of large relational data sets stored in NoSQL databases through an SQL interface have been proposed, focusing on scalability and availability. This paper presents a comparative analysis of these approaches based on an architectural classification that organizes them according to their system architectures. Our motivation is that wrapping is a relevant strategy for relational-based applications that intend to move relational data to NoSQL databases (usually maintained in the cloud). We also claim that this research area has some open issues, given that most approaches deal with only a subset of SQL operations or support only specific target NoSQL databases. Our intention with this survey is, therefore, to contribute to the state of the art in this research area and also to provide a basis for choosing or even designing a relational-to-NoSQL data wrapping solution. Full article
3 pages, 136 KiB  
Editorial
Special Issue “MoDAT: Designing the Market of Data”
by Yukio Ohsawa
Information 2019, 10(7), 240; https://doi.org/10.3390/info10070240 - 13 Jul 2019
Viewed by 2413
Abstract
The fifth International Workshop on the Market of Data (MoDAT2017) was held on November 18, 2017, in New Orleans, USA, in conjunction with IEEE ICDM 2017 [...] Full article
(This article belongs to the Special Issue MoDAT: Designing the Market of Data)
39 pages, 3623 KiB  
Article
Multi-Modal Emotion Aware System Based on Fusion of Speech and Brain Information
by Rania M. Ghoniem, Abeer D. Algarni and Khaled Shaalan
Information 2019, 10(7), 239; https://doi.org/10.3390/info10070239 - 11 Jul 2019
Cited by 10 | Viewed by 4355
Abstract
In multi-modal emotion aware frameworks, it is essential to estimate the emotional features and then fuse them to different degrees, following either a feature-level or a decision-level strategy. While features from several modalities may enhance the classification performance, they may also exhibit high dimensionality and make the learning process complex for the most commonly used machine learning algorithms. To overcome issues of feature extraction and multi-modal fusion, hybrid fuzzy-evolutionary computation methodologies are employed to demonstrate ultra-strong capability of learning features and dimensionality reduction. This paper proposes a novel multi-modal emotion aware system by fusing speech with EEG modalities. Firstly, a mixed feature set of speaker-dependent and speaker-independent characteristics is estimated from the speech signal. Further, EEG is utilized as an inner channel complementing speech for more authoritative recognition, by extracting multiple features belonging to the time, frequency, and time–frequency domains. For classifying unimodal data of either speech or EEG, a hybrid fuzzy c-means-genetic algorithm-neural network model is proposed, whose fitness function finds the optimal fuzzy cluster number that reduces the classification error. To fuse speech with EEG information, a separate classifier is used for each modality, and the output is computed by integrating their posterior probabilities. Results show the superiority of the proposed model, with overall average accuracy rates of 98.06%, 97.28%, and 98.53% for EEG, speech, and multi-modal recognition, respectively. The proposed model is also applied to two public databases for speech and EEG, namely SAVEE and MAHNOB, on which it achieves accuracies of 98.21% and 98.26%, respectively. Full article
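A minimal sketch of the decision-level fusion step: one classifier per modality, with per-class posterior probabilities combined into a single decision. The equal weights and the weighted-sum rule are illustrative assumptions, not necessarily the paper's exact integration formula.

    import numpy as np

    def fuse_posteriors(p_speech, p_eeg, w_speech=0.5, w_eeg=0.5):
        """Weighted sum of per-class posteriors from the two modalities."""
        fused = w_speech * np.asarray(p_speech) + w_eeg * np.asarray(p_eeg)
        return fused / fused.sum()

    # Posteriors over (angry, happy, sad) from each unimodal classifier:
    p = fuse_posteriors([0.7, 0.2, 0.1], [0.5, 0.4, 0.1])
    print(p, "->", ["angry", "happy", "sad"][int(np.argmax(p))])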
20 pages, 3426 KiB  
Article
On the Use of Mobile Devices as Controllers for First-Person Navigation in Public Installations
by Spyros Vosinakis and Anna Gardeli
Information 2019, 10(7), 238; https://doi.org/10.3390/info10070238 - 11 Jul 2019
Cited by 3 | Viewed by 3537
Abstract
User navigation in public installations displaying 3D content is mostly supported by mid-air interactions using motion sensors, such as Microsoft Kinect. On the other hand, smartphones have been used as external controllers of large-screen installations or game environments, and they may also be effective in supporting 3D navigation. This paper aims to examine whether smartphone-based control is a reliable alternative to mid-air interaction for four-degrees-of-freedom (4-DOF) first-person navigation, and to discover suitable interaction techniques for a smartphone controller. For this purpose, we set up two studies: a comparative study between smartphone-based and Kinect-based navigation, and a gesture elicitation study to collect user preferences and intentions regarding 3D navigation methods using a smartphone. The results of the first study were encouraging, as users with smartphone input performed at least as well as with Kinect and most of them preferred it as a means of control, whilst the second study produced a number of noteworthy results regarding proposed user gestures and users’ stance towards using a mobile phone for 3D navigation. Full article
(This article belongs to the Special Issue Wearable Augmented and Mixed Reality Applications)
14 pages, 1197 KiB  
Article
Interactions and Sentiment in Personal Finance Forums: An Exploratory Analysis
by Maurizio Naldi
Information 2019, 10(7), 237; https://doi.org/10.3390/info10070237 - 10 Jul 2019
Cited by 4 | Viewed by 3120
Abstract
The kinds of interactions taking place in an online personal finance forum and the sentiments expressed in its posts may influence the diffusion and usefulness of those forums. We explore a set of major threads on a personal finance forum to assess the degree of participation of posters and the prevailing sentiments. Participation appears to be dominated by a small number of posters, with the most frequent poster contributing more than a third of all posts. Just a small fraction of all possible direct interactions actually takes place. Dominance is also confirmed by the large presence of self-replies (i.e., a poster submitting several posts in succession) and rejoinders (i.e., a poster counter-replying to another poster). Though trust is the prevailing sentiment, anger and fear appear to be present as well, though at a lower level, revealing that posts exhibit both aggressive and defensive tones. Full article
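For concreteness, the two interaction patterns named above can be counted from the ordered sequence of posters in a thread, as in this minimal sketch (poster names are invented).

    def count_interactions(thread):
        """thread: ordered list of poster ids, one per post."""
        self_replies = rejoinders = 0
        for i in range(1, len(thread)):
            if thread[i] == thread[i - 1]:
                self_replies += 1          # pattern A, A
            elif i >= 2 and thread[i] == thread[i - 2]:
                rejoinders += 1            # pattern A, B, A
        return self_replies, rejoinders

    print(count_interactions(["ann", "ann", "bob", "ann", "cat"]))  # -> (1, 1)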
20 pages, 3678 KiB  
Article
Author Cooperation Network in Biology and Chemistry Literature during 2014–2018: Construction and Structural Characteristics
by Jinsong Zhang, Xue Yang, Xuan Hu and Taoying Li
Information 2019, 10(7), 236; https://doi.org/10.3390/info10070236 - 09 Jul 2019
Cited by 3 | Viewed by 3362
Abstract
How to explore the interaction between an individual researcher and others in scientific research, find out the degree of association among individual researchers, and evaluate the contribution of researchers to the whole according to the mechanism and law of interaction is of great significance for grasping the overall trend of the field. Scholars mostly use bibliometrics to solve these problems and analyze the citation and cooperation among academic achievements from the dimension of “quantity”. However, there is still no mature method for exploring the evolution of knowledge and the relationships between authors; this paper tries to fill this gap. We narrow the scope of research and focus on the literature in biology and chemistry, collecting all the papers from the PubMed system (a comprehensive, authoritative database of biomedical papers) during 2014–2018 and taking the year as the unit of analysis so as to improve the accuracy of the analysis. Then, we construct the author cooperation networks. Finally, through the above methods and steps, we identify the core authors of each year, analyze the recent cooperative relationships among authors, and predict some changes in those relationships based on the networks’ analytical data, evaluating and estimating the role that authors play in the overall field. We expect that cooperative authorship networks supported by complex network theory can better explain authors’ cooperative relationships. Full article
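As a minimal sketch of the construction step, the snippet below builds a weighted co-authorship graph with networkx and ranks authors by degree centrality as a simple proxy for core authors; the author names are toy data, and the paper's own measures may differ.

    from itertools import combinations
    import networkx as nx

    papers = [["Li", "Chen", "Wang"], ["Li", "Zhao"], ["Chen", "Wang"]]  # toy data

    G = nx.Graph()
    for authors in papers:
        for a, b in combinations(authors, 2):
            # One edge per co-authoring pair, weighted by number of joint papers.
            w = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
            G.add_edge(a, b, weight=w)

    centrality = nx.degree_centrality(G)
    core = sorted(centrality, key=centrality.get, reverse=True)
    print(core[:2])  # most-connected authors in the toy network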
10 pages, 214 KiB  
Article
Beyond Open Data Hackathons: Exploring Digital Innovation Success
by Fotis Kitsios and Maria Kamariotou
Information 2019, 10(7), 235; https://doi.org/10.3390/info10070235 - 09 Jul 2019
Cited by 21 | Viewed by 4990
Abstract
Previous researchers have examined the motivations of developers to participate in hackathon events and the challenges of open data hackathons, but few studies have focused on the preparation and evaluation of these contests. Thus, the purpose of this paper is to examine the factors that lead to the effective implementation and success of open data hackathons and innovation contests. Six case studies of open data hackathons and innovation contests held between 2014 and 2018 in Thessaloniki were studied in order to identify the factors leading to the success of hackathon contests, using criteria from the existing literature. The results show that the most significant factors were a clear problem definition, mentors’ participation in the contest, the level of support participants receive from mentors in launching their applications to the market, jury members’ knowledge and experience, the entry requirements of the competition, and the participation of companies, data providers, and academics. Furthermore, organizers should take team members’ competences and skills, as well as the support of post-launch activities for applications, into consideration. This paper can be of interest to organizers of hackathon events because it identifies the factors that should be taken into consideration for the successful implementation of these events. Full article
(This article belongs to the Special Issue Linked Open Data)
17 pages, 1412 KiB  
Article
A Proximity-Based Semantic Enrichment Approach of Volunteered Geographic Information: A Study Case of Waste of Water
by Liliane Soares da Costa, Italo Lopes Oliveira, Alexandra Moreira and Jugurta Lisboa-Filho
Information 2019, 10(7), 234; https://doi.org/10.3390/info10070234 - 08 Jul 2019
Cited by 1 | Viewed by 3204
Abstract
Volunteered geographic information (VGI) refers to geospatial data that are collected and/or shared voluntarily over the Internet. Its use, however, faces many limitations, such as data quality and difficulty in use and retrieval. One alternative for improving its use is semantic enrichment, a process that assigns semantic resources to metadata and data. This study proposes a VGI semantic enrichment method using linked data and a thesaurus. The method has two stages, one automatic and one manual. The automatic stage links VGI contributions to places that are of interest to users. In the manual stage, a thesaurus in the hydric domain was built based on terms found in the VGI. Finally, a process is proposed that returns semantically similar VGI contributions based on queries made by users. To verify the viability of the proposed method, contributions from the VGI system Gota D’Água, related to water waste prevention, were used. Full article
(This article belongs to the Special Issue Linked Open Data)
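A minimal sketch of the automatic stage under simple assumptions: each contribution is linked to the nearest place of interest within a cutoff, using great-circle (haversine) distance. The place list and the 500 m cutoff are illustrative, not the paper's actual data or rule.

    from math import asin, cos, radians, sin, sqrt

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in metres."""
        la1, lo1, la2, lo2 = map(radians, (lat1, lon1, lat2, lon2))
        h = sin((la2 - la1) / 2) ** 2 + cos(la1) * cos(la2) * sin((lo2 - lo1) / 2) ** 2
        return 2 * 6371000 * asin(sqrt(h))

    def link_to_place(contribution, places, cutoff_m=500):
        """Link a (lat, lon) contribution to the nearest place within the cutoff."""
        lat, lon = contribution
        best = min(places, key=lambda p: haversine_m(lat, lon, p[1], p[2]))
        return best[0] if haversine_m(lat, lon, best[1], best[2]) <= cutoff_m else None

    places = [("Praca da Se", -23.5507, -46.6334),
              ("Parque Ibirapuera", -23.5874, -46.6576)]
    print(link_to_place((-23.5510, -46.6330), places))  # -> Praca da Se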
13 pages, 2100 KiB  
Article
A Novel Approach for Web Service Recommendation Based on Advanced Trust Relationships
by Lijun Duan, Hao Tian and Kun Liu
Information 2019, 10(7), 233; https://doi.org/10.3390/info10070233 - 06 Jul 2019
Cited by 8 | Viewed by 3155
Abstract
Service recommendation is one of the important means of service selection. Traditional trust-based Web service recommendation methods ignore the influence of typical data sources, such as service information and interaction logs, on the similarity calculation of user preferences, and they insufficiently consider the dynamics of trust relationships. To address these problems, a novel approach for Web service recommendation based on advanced trust relationships is presented. After considering the influence of indirect trust paths, an improved calculation of the indirect trust degree is proposed. By quantifying the popularity of a service, a method for calculating user preference similarity is investigated. Furthermore, a dynamic adjustment mechanism for trust is designed by differentiating the effect of each service recommendation. Integrating these efforts, a service recommendation mechanism is introduced, in which a new service recommendation algorithm is described. Experimental results show that, compared with existing methods, the proposed approach not only achieves higher accuracy of service recommendation, but can also resist attacks from malicious users more effectively. Full article
(This article belongs to the Special Issue Computational Social Science)
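To illustrate the indirect trust idea, here is a minimal sketch that propagates trust multiplicatively along paths and keeps the strongest path; the multiplication and max rules (and the user names) are common choices assumed for illustration, not necessarily the paper's exact formulas.

    def indirect_trust(trust, source, target, seen=None):
        """trust: dict (u, v) -> direct trust value in (0, 1]."""
        seen = (seen or set()) | {source}
        if (source, target) in trust:
            return trust[(source, target)]
        best = 0.0
        for (u, v), t in trust.items():
            if u == source and v not in seen:  # extend the path, avoiding cycles
                best = max(best, t * indirect_trust(trust, v, target, seen))
        return best

    trust = {("alice", "bob"): 0.9, ("bob", "carol"): 0.8,
             ("alice", "dave"): 0.4, ("dave", "carol"): 0.9}
    print(indirect_trust(trust, "alice", "carol"))  # 0.9 * 0.8 = 0.72 beats 0.36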
9 pages, 311 KiB  
Article
Idempotent Factorizations of Square-Free Integers
by Barry Fagin
Information 2019, 10(7), 232; https://doi.org/10.3390/info10070232 - 06 Jul 2019
Cited by 2 | Viewed by 3238
Abstract
We explore the class of positive integers n that admit idempotent factorizations n = p̄q̄ such that λ(n) | (p̄ − 1)(q̄ − 1), where λ is the Carmichael lambda function. Idempotent factorizations with p̄ and q̄ prime have received the most attention due to their cryptographic advantages, but there are an infinite number of n with idempotent factorizations containing composite p̄ and/or q̄. Idempotent factorizations are exactly those p̄ and q̄ that generate correctly functioning keys in the Rivest–Shamir–Adleman (RSA) 2-prime protocol with n as the modulus. While the resulting p̄ and q̄ have no cryptographic utility and therefore should never be employed in that capacity, idempotent factorizations warrant study in their own right as they live at the intersection of multiple hard problems in computer science and number theory. We present some analytical results here. We also demonstrate the existence of maximally idempotent integers, those n for which all bipartite factorizations are idempotent. We show how to construct them, and present preliminary results on their distribution. Full article
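The defining condition can be checked directly. Below is a minimal pure-Python sketch with a small Carmichael function; the example n = 105 = 15 × 7 shows an idempotent factorization with a composite factor.

    from math import gcd

    def factorize(n):
        """Return the prime factorization of n as {prime: exponent}."""
        f, d = {}, 2
        while d * d <= n:
            while n % d == 0:
                f[d] = f.get(d, 0) + 1
                n //= d
            d += 1
        if n > 1:
            f[n] = f.get(n, 0) + 1
        return f

    def carmichael(n):
        """Carmichael lambda: lcm of lambda over the prime-power factors."""
        lam = 1
        for p, k in factorize(n).items():
            lp = p ** (k - 1) * (p - 1)
            if p == 2 and k >= 3:   # lambda(2^k) = 2^(k-2) for k >= 3
                lp //= 2
            lam = lam * lp // gcd(lam, lp)
        return lam

    def is_idempotent(p, q):
        """Check that lambda(pq) divides (p - 1)(q - 1)."""
        return (p - 1) * (q - 1) % carmichael(p * q) == 0

    print(is_idempotent(3, 11))  # True: RSA keys built from (3, 11) work
    print(is_idempotent(15, 7))  # True: composite factor 15 of n = 105 also works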
15 pages, 7236 KiB  
Article
Assisting Forensic Identification through Unsupervised Information Extraction of Free Text Autopsy Reports: The Disappearances Cases during the Brazilian Military Dictatorship
by Patricia Martin-Rodilla, Marcia L. Hattori and Cesar Gonzalez-Perez
Information 2019, 10(7), 231; https://doi.org/10.3390/info10070231 - 05 Jul 2019
Cited by 3 | Viewed by 4041
Abstract
Anthropological, archaeological, and forensic studies situate enforced disappearance as a strategy associated with the Brazilian military dictatorship (1964–1985), leaving hundreds of persons with neither their identity nor their cause of death established. Their forensic reports are the only existing clue for identifying people and detecting possible crimes associated with them. The exchange of information among institutions about the identities of disappeared people was not a common practice. Thus, their analysis requires unsupervised techniques, mainly because contextual annotation is extremely time-consuming, difficult to obtain, and highly dependent on the annotator. The use of these techniques allows researchers to assist identification and analysis in four areas: common causes of death, relevant body locations, personal belongings terminology, and correlations between actors such as doctors and police officers involved in the disappearances. This paper analyzes almost 3000 textual reports of missing persons in São Paulo city during the Brazilian dictatorship through unsupervised information extraction algorithms for Portuguese, identifying named entities and relevant terminology associated with these four criteria. The analysis allowed us to observe terminological patterns relevant for people identification (e.g., presence of rings or similar personal belongings) and to automate the study of correlations between actors. The proposed system acts as a first classificatory and indexing middleware for the reports and represents a feasible system that can assist researchers working on pattern search among autopsy reports. Full article
(This article belongs to the Special Issue Natural Language Processing and Text Mining)
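As a rough stand-in for the unsupervised terminology step, the sketch below ranks candidate terms in toy Portuguese report snippets by aggregate TF-IDF weight; the paper's actual pipeline (Portuguese named entity recognition and term extraction) is richer than this.

    from sklearn.feature_extraction.text import TfidfVectorizer

    reports = [
        "corpo encontrado com anel e documentos",         # body found with ring and documents
        "causa da morte ferimento por arma de fogo",      # cause of death: gunshot wound
        "corpo sem documentos causa da morte afogamento",  # no documents; drowning
    ]

    vec = TfidfVectorizer(ngram_range=(1, 2))
    X = vec.fit_transform(reports)
    scores = X.sum(axis=0).A1  # aggregate TF-IDF weight per term
    terms = sorted(zip(vec.get_feature_names_out(), scores), key=lambda t: -t[1])
    print(terms[:5])  # top-ranked candidate terminology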
2 pages, 143 KiB  
Editorial
Editorial for the Special Issue on “Modern Recommender Systems: Approaches, Challenges and Applications”
by Costas Vassilakis and Dionisis Margaris
Information 2019, 10(7), 230; https://doi.org/10.3390/info10070230 - 04 Jul 2019
Cited by 1 | Viewed by 2824
Abstract
Recommender systems are nowadays an indispensable part of most personalized systems implementing information access and content delivery, supporting a great variety of user activities [...] Full article
(This article belongs to the Special Issue Modern Recommender Systems: Approaches, Challenges and Applications)
15 pages, 2417 KiB  
Article
Visualization Method for Arbitrary Cutting of Finite Element Data Based on Radial-Basis Functions
by Shifa Xia, Xiulin Li, Fuye Xu, Lufei Chen and Yong Zhang
Information 2019, 10(7), 229; https://doi.org/10.3390/info10070229 - 03 Jul 2019
Cited by 1 | Viewed by 3251
Abstract
Finite element data form an important basis for engineers to undertake analysis and research. In most cases, it is difficult to generate the internal sections of finite element data, and professional operations are required. To display the internal data of entities, a method for generating arbitrary sections of finite element data based on radial basis function (RBF) interpolation is proposed in this paper. The RBF interpolation function is used to realize arbitrary surface cutting of the entity, and the section can be generated by triangulating the discrete tangent points. Experimental studies showed that the method makes it very convenient for users to obtain visualization results for an arbitrary section through simple and intuitive interactions. Full article
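A minimal sketch of the interpolation step, with SciPy's RBFInterpolator standing in for the paper's RBF machinery: a smooth cutting surface is fitted through a few picked points and evaluated on a grid; intersecting this surface with the mesh would then yield the section. The points and grid are toy assumptions.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # (x, y) positions of picked points and their cutting height z:
    xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
    z = np.array([0.0, 0.2, 0.2, 0.0, 0.5])

    surface = RBFInterpolator(xy, z, kernel="thin_plate_spline")

    gx, gy = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    print(surface(grid).reshape(5, 5))  # cutting-surface height over the grid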
25 pages, 415 KiB  
Article
Multilingual Open Information Extraction: Challenges and Opportunities
by Daniela Barreiro Claro, Marlo Souza, Clarissa Castellã Xavier and Leandro Oliveira
Information 2019, 10(7), 228; https://doi.org/10.3390/info10070228 - 02 Jul 2019
Cited by 27 | Viewed by 6224
Abstract
The number of documents published on the Web in languages other than English grows every year. As a consequence, the need to extract useful information from different languages increases, highlighting the importance of research into Open Information Extraction (OIE) techniques. Different OIE methods have dealt with features from a single language; however, few approaches tackle multilingual aspects. In those approaches, multilingualism is restricted to processing text in different languages rather than exploring cross-linguistic resources, which results in low precision due to the use of general rules. Multilingual methods have been applied to numerous problems in Natural Language Processing, achieving satisfactory results and demonstrating that knowledge acquisition for one language can be transferred to other languages to improve the quality of the facts extracted. We argue that a multilingual approach can enhance OIE methods, as it is ideal for evaluating and comparing OIE systems, and can therefore be applied to the collected facts. In this work, we discuss how transferring knowledge between languages can increase acquisition in multilingual approaches. We provide a roadmap of the Multilingual Open IE area covering state-of-the-art studies. Additionally, we evaluate the transfer of knowledge to improve the quality of the facts extracted in each language. Moreover, we discuss the importance of a parallel corpus for evaluating and comparing multilingual systems. Full article
(This article belongs to the Special Issue Natural Language Processing and Text Mining)
22 pages, 6723 KiB  
Article
Visualizing the Knowledge Structure and Research Evolution of Infrared Detection Technology Studies
by Rui Hong, Chenglang Xiang, Hui Liu, Adam Glowacz and Wei Pan
Information 2019, 10(7), 227; https://doi.org/10.3390/info10070227 - 01 Jul 2019
Cited by 23 | Viewed by 5040
Abstract
This paper aims to explore the current status, research trends and hotspots in the field of infrared detection technology through bibliometric analysis and visualization techniques, based on Science Citation Index Expanded (SCIE) and Social Sciences Citation Index (SSCI) articles published between 1990 and 2018, using the VOSviewer and CiteSpace software tools. Based on our analysis, we first present the spatiotemporal distribution of the literature related to infrared detection technology, including annual publications, country/region of origin, main research organizations, and source publications. Then, we report the main subject categories involved in infrared detection technology. Furthermore, we adopt literature cocitation, author cocitation, keyword co-occurrence, and timeline visualization analyses to visually explore the research fronts and trends and to present the evolution of infrared detection technology research. The results show that China, the USA and Italy are the three most active countries in infrared detection technology research and that the Centre National de la Recherche Scientifique has the largest number of publications among related organizations. The most prominent research hotspots in the past five years are vibration thermal imaging, pulse thermal imaging, photonic crystals, skin temperature, remote sensing technology, and detection of delamination defects in concrete. Future research on infrared detection technology is trending from qualitative to quantitative studies, toward engineering application research, and toward combining infrared detection with other detection techniques. The proposed approach, based on scientific knowledge graph analysis, can be used to establish reference information and a research basis for the application and development of methods in the domain of infrared detection technology studies. Full article
(This article belongs to the Section Information Systems)
21 pages, 3212 KiB  
Review
Big Data Analytics and Firm Performance: A Systematic Review
by Parisa Maroufkhani, Ralf Wagner, Wan Khairuzzaman Wan Ismail, Mas Bambang Baroto and Mohammad Nourani
Information 2019, 10(7), 226; https://doi.org/10.3390/info10070226 - 01 Jul 2019
Cited by 58 | Viewed by 13314
Abstract
The literature on big data analytics and firm performance is still fragmented and lacking in attempts to integrate the current studies’ results. This study aims to provide a systematic review of contributions related to big data analytics and firm performance. The authors assess papers listed in the Web of Science index. This study identifies the factors that may influence the adoption of big data analytics in various parts of an organization and categorizes the diverse types of performance that big data analytics can address. Directions for future research are developed from the results. This systematic review proposes to create avenues for both conceptual and empirical research streams by emphasizing the importance of big data analytics in improving firm performance. In addition, this review offers both scholars and practitioners an increased understanding of the link between big data analytics and firm performance. Full article
(This article belongs to the Section Information Applications)
14 pages, 737 KiB  
Article
Eligibility of BPMN Models for Business Process Redesign
by George Tsakalidis, Kostas Vergidis, Georgia Kougka and Anastasios Gounaris
Information 2019, 10(7), 225; https://doi.org/10.3390/info10070225 - 01 Jul 2019
Cited by 23 | Viewed by 8743
Abstract
Business process redesign (BPR) is an organizational initiative for achieving competitive, multi-faceted advantages regarding business processes, in terms of cycle time, quality, cost, customer satisfaction, and other critical performance metrics. Although BPR tools and methodologies are increasingly being adopted, process innovation efforts have often proven ineffective in delivering the expected outcome. This paper investigates the eligibility of BPMN process models for the application of redesign methods inspired by data-flow communities. In previous work, the transformation of a business process model to a directed acyclic graph (DAG) yielded notable optimization results for determining the average performance of process executions consisting of ad-hoc processes. Still, this utilization encountered drawbacks due to a lack of input specification, complexity assessment, and normalization of the BPMN model, and due to limited applicability to more generic business process cases. This paper presents an assessment mechanism that measures the eligibility of a BPMN model and its capability to be effectively transformed to a DAG and further subjected to data-centric workflow optimization methods. The proposed mechanism evaluates the model type, complexity metrics, normalization, and optimization capability of candidate process models, while allowing users to set their desired complexity thresholds. An indicative example demonstrates the assessment phases and illustrates the usability of the proposed mechanism towards the advancement and facilitation of the optimization phase. Finally, the authors review BPMN models from an SOA-based business process design (BPD) repository and from the relevant literature, and assess their eligibility. Full article
(This article belongs to the Section Information Applications)
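As a minimal illustration of one eligibility criterion, the sketch below treats a process model's sequence flows as a directed graph and checks whether it is a DAG; real BPMN parsing and the paper's complexity metrics are out of scope here, and the flow names are invented.

    import networkx as nx

    flows = [("start", "check order"), ("check order", "approve"),
             ("approve", "ship"), ("ship", "end"),
             ("approve", "check order")]  # a loop back: breaks DAG eligibility

    G = nx.DiGraph(flows)
    if nx.is_directed_acyclic_graph(G):
        print("eligible:", list(nx.topological_sort(G)))
    else:
        print("not eligible; cycles:", list(nx.simple_cycles(G)))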
16 pages, 2980 KiB  
Article
A Two-Stage Household Electricity Demand Estimation Approach Based on Edge Deep Sparse Coding
by Yaoxian Liu, Yi Sun and Bin Li
Information 2019, 10(7), 224; https://doi.org/10.3390/info10070224 - 01 Jul 2019
Cited by 5 | Viewed by 3127
Abstract
The widespread popularity of smart meters enables the collection of an immense amount of fine-grained data, thereby realizing a two-way information flow between the grid and the customer, along with personalized interaction services such as precise demand response. These services rely on the accurate estimation of electricity demand, and the key challenge lies in the high volatility and uncertainty of load profiles and the tremendous communication pressure on the data link or computing center. This study proposes a novel two-stage approach for estimating household electricity demand based on edge deep sparse coding. In the first, sparse coding stage, the status of electrical devices is introduced into the deep non-negative k-means-singular value decomposition (K-SVD) sparse algorithm to estimate the behavior of customers. The patterns extracted in the first stage are used to train a long short-term memory (LSTM) network and forecast household electricity demand over the subsequent 30 min. The developed method was implemented on the Python platform and tested on the AMPds dataset. The proposed method outperformed the multi-layer perceptron (MLP) by 51.26%, the autoregressive integrated moving average model (ARIMA) by 36.62%, and LSTM with shallow K-SVD by 16.4% in terms of mean absolute percentage error (MAPE). In terms of mean absolute error and root mean squared error, the improvements were 53.95% and 36.73% over MLP, 28.47% and 23.36% over ARIMA, and 11.38% and 18.16% over LSTM with shallow K-SVD. The experimental results demonstrate that the proposed method provides a considerable and stable improvement in household electricity demand estimation. Full article
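For reference, the three error metrics quoted above can be computed as follows; this is a minimal sketch with toy numbers, not the paper's evaluation code.

    import numpy as np

    def mape(actual, forecast):
        """Mean absolute percentage error, in percent."""
        actual, forecast = np.asarray(actual), np.asarray(forecast)
        return 100 * np.mean(np.abs((actual - forecast) / actual))

    def mae(actual, forecast):
        """Mean absolute error."""
        return np.mean(np.abs(np.asarray(actual) - np.asarray(forecast)))

    def rmse(actual, forecast):
        """Root mean squared error."""
        return np.sqrt(np.mean((np.asarray(actual) - np.asarray(forecast)) ** 2))

    actual, forecast = [1.2, 0.8, 1.5, 2.0], [1.0, 0.9, 1.4, 2.3]
    print(mape(actual, forecast), mae(actual, forecast), rmse(actual, forecast))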
13 pages, 271 KiB  
Article
Multiple Goal Linear Programming-Based Decision Preference Inconsistency Recognition and Adjustment Strategies
by Jian-Zhang Wu, Li Huang, Rui-Jie Xi and Yi-Ping Zhou
Information 2019, 10(7), 223; https://doi.org/10.3390/info10070223 - 01 Jul 2019
Cited by 10 | Viewed by 2788
Abstract
The purpose of this paper is to enrich the inconsistency check and adjustment methods for decision preference information in the context of capacity-based multiple criteria decision making. We first show that almost all the preference information of a decision maker can be represented as a collection of linear constraints. By introducing positive and negative deviations, we construct the multiple goal linear programming (MGLP)-based inconsistency recognition model to find the redundant and contradicting constraints. Then, based on the redundancy and contradiction degrees, we propose three types of adjustment strategies and accordingly adopt some explicit and implicit indices w.r.t. the capacity to test the implementation effect of each adjustment strategy. The empirical analyses verify that all the strategies are competent for the adjustment task, and that the second strategy usually costs relatively less effort. It is shown that the MGLP-based inconsistency recognition and adjustment method needs less background knowledge and is applicable for dealing with complicated decision preference information. Full article
(This article belongs to the Section Information Theory and Methodology)
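A minimal sketch of the deviation-variable idea with SciPy: each preference statement a·x = b becomes a goal a·x + dneg − dpos = b, and the LP minimizes the total deviation, so goals retaining nonzero deviation at the optimum flag a contradiction. The toy constraints stand in for the paper's capacity constraints.

    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[1.0, 1.0],    # goal 1: x1 + x2 = 1
                  [1.0, -1.0],   # goal 2: x1 - x2 = 0.2
                  [1.0, 1.0]])   # goal 3: x1 + x2 = 1.4 (conflicts with goal 1)
    b = np.array([1.0, 0.2, 1.4])
    m, n = A.shape

    # Variable layout: [x (n), dneg (m), dpos (m)], all deviations >= 0.
    A_eq = np.hstack([A, np.eye(m), -np.eye(m)])
    c = np.concatenate([np.zeros(n), np.ones(2 * m)])  # minimize total deviation
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (n + 2 * m))

    x = res.x[:n]
    dev = res.x[n:n + m] + res.x[n + m:]
    print("x =", x, "deviation per goal =", dev)  # nonzero deviations flag conflicts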
17 pages, 5061 KiB  
Article
Hadoop Performance Analysis Model with Deep Data Locality
by Sungchul Lee, Ju-Yeon Jo and Yoohwan Kim
Information 2019, 10(7), 222; https://doi.org/10.3390/info10070222 - 27 Jun 2019
Cited by 6 | Viewed by 7523
Abstract
Background: Hadoop has become the base framework for big data systems via the simple concept that moving computation is cheaper than moving data. Hadoop increases data locality in the Hadoop Distributed File System (HDFS) to improve the performance of the system; the network traffic among the nodes of a big data system is reduced by increasing the proportion of data-local processing on each machine. Previous research increased data locality in one of the MapReduce stages to increase Hadoop performance. However, there is currently no mathematical performance model for data locality on Hadoop. Methods: This study built a Hadoop performance analysis model with data locality for analyzing the entire MapReduce process. The paper explains the data locality concept in the map and shuffle stages, and shows how to apply the performance analysis model to increase the performance of a Hadoop system through deep data locality. Results: This research validated deep data locality as a means of increasing Hadoop performance via three tests: a simulation-based test, a cloud test, and a physical test. According to the tests, the authors improved the Hadoop system by over 34% by using deep data locality. Conclusions: Deep data locality improved Hadoop performance by reducing data movement in HDFS. Full article
(This article belongs to the Special Issue Big Data Research, Development, and Applications––Big Data 2018)
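As a minimal illustration of the quantity being optimized, the sketch below computes the data-locality ratio of a set of map tasks, i.e., the fraction whose input block is replicated on the node that executes them; the block placements and schedules are toy assumptions.

    def locality_ratio(assignments, block_locations):
        """assignments: task -> executing node; block_locations: task -> set of
        nodes holding HDFS replicas of that task's input block."""
        local = sum(1 for t, node in assignments.items()
                    if node in block_locations[t])
        return local / len(assignments)

    blocks = {"m1": {"n1", "n2"}, "m2": {"n2", "n3"}, "m3": {"n1", "n3"}}
    default = {"m1": "n1", "m2": "n1", "m3": "n2"}   # locality-blind scheduling
    ddl     = {"m1": "n1", "m2": "n2", "m3": "n3"}   # pre-allocated blocks
    print(locality_ratio(default, blocks), locality_ratio(ddl, blocks))  # 0.33 vs 1.0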
18 pages, 874 KiB  
Article
Analysis of Usability for the Dice CAPTCHA
by Alessia Amelio, Ivo Rumenov Draganov, Radmila Janković and Dejan Tanikić
Information 2019, 10(7), 221; https://doi.org/10.3390/info10070221 - 26 Jun 2019
Cited by 1 | Viewed by 4280
Abstract
This paper explores the usability of the Dice CAPTCHA via analysis of the time spent solving the CAPTCHA and the number of tries needed to solve it. The experiment was conducted on a set of 197 subjects who use the Internet, differentiated by age, daily Internet usage in hours, Internet experience in years, and the type of device on which the CAPTCHA is solved. Each user was asked to solve the Dice CAPTCHA on a tablet or laptop, and the time to successfully solve it in a given number of attempts was registered. The collected data were analyzed via association rule mining and an artificial neural network. The analysis revealed that the solution time in a given number of attempts depended on different combinations of values of the users’ features, and identified the most meaningful features influencing the solution time. In addition, this dependence was explored by predicting the CAPTCHA solution time from the users’ features via an artificial neural network. The obtained results are very helpful for analyzing the combination of features that influence the CAPTCHA solution and, consequently, for finding the CAPTCHA that best complies with the postulates of an “ideal” test. Full article
(This article belongs to the Special Issue Artificial Intelligence—Methodology, Systems, and Applications)