Information, Volume 10, Issue 11 (November 2019) – 33 articles

Cover Story: We present the technologies related to, and the theoretical background of, an intelligent interconnected infrastructure for public security and safety. The framework's innovation lies in the intelligent combination of devices and human information, oriented towards human and situational awareness, in order to provide a protected and secure environment for citizens. The framework is currently being used to support visitors in public spaces and events by creating the appropriate infrastructure to address a set of urgent situations that may arise, including health-related problems and missing children in overcrowded environments. It supports smart links between humans and entities in order to achieve goals and to adapt device operation to comply with human objectives, profiles, and privacy.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; PDF is the official version. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
18 pages, 3049 KiB  
Article
Rolling-Bearing Fault-Diagnosis Method Based on Multimeasurement Hybrid-Feature Evaluation
by Jianghua Ge, Guibin Yin, Yaping Wang, Di Xu and Fen Wei
Information 2019, 10(11), 359; https://doi.org/10.3390/info10110359 - 19 Nov 2019
Cited by 7 | Viewed by 2510
Abstract
To improve the accuracy of rolling-bearing fault diagnosis and to address the incomplete information provided by single-measurement feature-evaluation methods, this paper combines the advantages of various measurement models and proposes a fault-diagnosis method based on multi-measurement hybrid-feature evaluation. In this study, an original feature set was first obtained by analyzing a collected vibration signal. The feature set included time- and frequency-domain features, as well as energy and Lempel–Ziv complexity features from the time–frequency domain obtained by empirical-mode decomposition (EMD). Second, a feature-evaluation framework of multiplicative hybrid models was constructed based on correlation, distance, information, and other measures. The framework was used to rank features and obtain rank weights, and the weights were then multiplied by the features to obtain a new feature set. Finally, this fault-feature set was used as the input of a category-divergence fault-diagnosis model based on kernel principal component analysis (KPCA) and a support vector machine (SVM). The clustering effect of the different fault categories became more pronounced and classification accuracy was improved. Full article
(This article belongs to the Section Information Applications)
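The Lempel–Ziv complexity feature named in the abstract can be illustrated with a short sketch: the vibration signal is binarized (median thresholding is assumed here; the paper may use a different coarse-graining) and the number of distinct phrases is counted with the classic Kaspar–Schuster (LZ76) parsing.

```python
def binarize(signal):
    """Threshold a numeric signal at its (upper) median into a '0'/'1' string."""
    med = sorted(signal)[len(signal) // 2]
    return ''.join('1' if v > med else '0' for v in signal)

def lz_complexity(s: str) -> int:
    """Number of distinct phrases in the Kaspar-Schuster (LZ76) parsing of s."""
    n = len(s)
    if n < 2:
        return n
    c, l = 1, 1            # phrase count; start index of the current phrase
    i, k, k_max = 0, 1, 1  # candidate start, extension length, best extension
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:              # current phrase runs off the end
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:                 # no earlier reproduction: new phrase
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c
```

A constant sequence parses into very few phrases, a periodic one into slightly more, and an irregular one into many; this count is what makes the feature useful for separating fault signatures.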
10 pages, 249 KiB  
Article
Information Gain in Event Space Reflects Chance and Necessity Components of an Event
by Georg F. Weber
Information 2019, 10(11), 358; https://doi.org/10.3390/info10110358 - 19 Nov 2019
Cited by 2 | Viewed by 1821
Abstract
Information flow for occurrences in phase space can be assessed through the application of the Lyapunov characteristic exponent (multiplicative ergodic theorem), which is positive for non-linear systems that act as information sources and is negative for events that constitute information sinks. Attempts to unify the reversible descriptions of dynamics with the irreversible descriptions of thermodynamics have replaced phase space models with event space models. The introduction of operators for time and entropy in lieu of traditional trajectories has consequently limited—to eigenvectors and eigenvalues—the extent of knowable details about systems governed by such depictions. In this setting, a modified Lyapunov characteristic exponent for vector spaces can be used as a descriptor for the evolution of information, which is reflective of the associated extent of undetermined features. This novel application of the multiplicative ergodic theorem leads directly to the formulation of a dimension that is a measure for the information gain attributable to the occurrence. Thus, it provides a readout for the magnitudes of chance and necessity that contribute to an event. Related algorithms express a unification of information content, degree of randomness, and complexity (fractal dimension) in event space. Full article
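The Lyapunov characteristic exponent the abstract builds on can be made concrete on a standard example. The sketch below estimates it for the logistic map x → r·x·(1−x) as the orbit average of log|f′(x)|; the map and parameter values are illustrative and not taken from the paper.

```python
import math

def lyapunov_logistic(r: float, x0: float = 0.3,
                      n: int = 100_000, burn_in: int = 1_000) -> float:
    """Estimate the Lyapunov exponent of x -> r*x*(1-x) as the
    orbit average of log|f'(x)|, with f'(x) = r*(1 - 2*x)."""
    x = x0
    for _ in range(burn_in):     # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        # clamp avoids log(0) if the orbit passes extremely close to x = 0.5
        total += math.log(max(abs(r * (1 - 2 * x)), 1e-300))
        x = r * x * (1 - x)
    return total / n
```

For r = 4 the exact value is ln 2 ≈ 0.693, a positive exponent marking an information source in the abstract's terms, while in the stable period-2 regime (e.g., r = 3.2) the estimate is negative, marking an information sink.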
21 pages, 1490 KiB  
Article
A New Methodology for Automatic Cluster-Based Kriging Using K-Nearest Neighbor and Genetic Algorithms
by Carlos Yasojima, João Protázio, Bianchi Meiguins, Nelson Neto and Jefferson Morais
Information 2019, 10(11), 357; https://doi.org/10.3390/info10110357 - 18 Nov 2019
Cited by 6 | Viewed by 3532
Abstract
Kriging is a geostatistical interpolation technique that performs the prediction of observations in unknown locations through previously collected data. The modelling of the variogram is an essential step of the kriging process because it drives the accuracy of the interpolation model. The conventional method of variogram modelling consists of using specialized knowledge and in-depth study to determine which parameters are suitable for the theoretical variogram. However, this situation is not always possible, and, in this case, it becomes interesting to use an automatic process. Thus, this work aims to propose a new methodology to automate the estimation of theoretical variogram parameters of the kriging process. The proposed methodology is based on preprocessing techniques, data clustering, genetic algorithms, and the K-Nearest Neighbor classifier (KNN). The performance of the methodology was evaluated using two databases, and it was compared to other optimization techniques widely used in the literature. The impacts of the clustering step on the stationary hypothesis were also investigated with and without trend removal techniques. The results showed that, in this automated proposal, the clustering process increases the accuracy of the kriging prediction. However, it generates groups that might not be stationary. Genetic algorithms are easily configurable with the proposed heuristic when setting the variable ranges in comparison to other optimization techniques, and the KNN method is satisfactory in solving some problems caused by the clustering task and allocating unknown points into previously determined clusters. Full article
(This article belongs to the Section Artificial Intelligence)
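To illustrate the variogram-fitting step that the methodology automates, the sketch below fits a spherical theoretical variogram to empirical points. For brevity it uses an exhaustive grid search where the paper uses a genetic algorithm; the model choice and parameter grids are assumptions.

```python
import numpy as np

def spherical(h, nugget, sill, a):
    """Spherical theoretical variogram: rises to nugget + sill at range a."""
    h = np.asarray(h, dtype=float)
    inside = nugget + sill * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h <= a, inside, nugget + sill)

def fit_variogram(h, gamma, a_grid, sill_grid, nugget_grid):
    """Exhaustive search for the parameters minimizing squared error
    against the empirical variogram (the paper optimizes these with a
    genetic algorithm instead)."""
    best, best_err = None, float('inf')
    for a in a_grid:
        for s in sill_grid:
            for n0 in nugget_grid:
                err = float(np.sum((spherical(h, n0, s, a) - gamma) ** 2))
                if err < best_err:
                    best, best_err = (n0, s, a), err
    return best
```

With noise-free points generated from known parameters, the search recovers them exactly; on real empirical variograms the fit is approximate, which is where a proper optimizer earns its keep.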
17 pages, 5011 KiB  
Article
An Intrusion Detection System Based on a Simplified Residual Network
by Yuelei Xiao and Xing Xiao
Information 2019, 10(11), 356; https://doi.org/10.3390/info10110356 - 18 Nov 2019
Cited by 32 | Viewed by 3597
Abstract
Residual networks (ResNets) are prone to over-fitting on low-dimensional, small-scale datasets, and existing intrusion detection systems (IDSs) perform poorly, especially on remote-to-local (R2L) and user-to-root (U2R) attacks. To overcome these problems, a simplified residual network (S-ResNet) is proposed in this paper, which consists of several cascaded, simplified residual blocks. Compared with the original residual block, the simplified residual block deletes a weight layer and two batch normalization (BN) layers, adds a pooling layer, and replaces the rectified linear unit (ReLU) function with the parametric rectified linear unit (PReLU) function. Based on the S-ResNet, a novel IDS is proposed that includes a data preprocessing module, a random oversampling module, an S-ResNet layer, a fully connected layer, and a Softmax layer. The experimental results on the NSL-KDD dataset show that the S-ResNet-based IDS achieves higher accuracy, recall, and F1-score than an equal-scale ResNet-based IDS, especially for R2L and U2R attacks, and that it converges faster. This indicates that the S-ResNet reduces the complexity of the network and effectively prevents over-fitting, making it more suitable than ResNet for low-dimensional, small-scale datasets. Furthermore, the results on NSL-KDD also show that the S-ResNet-based IDS outperforms existing IDSs in terms of accuracy and recall, especially for R2L and U2R attacks. Full article
(This article belongs to the Section Artificial Intelligence)
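The ReLU-to-PReLU substitution described in the abstract can be sketched numerically. The toy 1-D block below mimics the simplified structure (one weight layer, a shortcut, and a pooling layer); the fixed slope and the pooling width are illustrative, since in the paper the PReLU slope is learned and the layers are convolutional.

```python
import numpy as np

def relu(x):
    """Standard ReLU: zero for negative inputs."""
    return np.maximum(np.asarray(x, dtype=float), 0.0)

def prelu(x, alpha=0.25):
    """Parametric ReLU: identity for positive inputs, a slope alpha
    (learned in the paper, fixed here for illustration) for negatives."""
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, x, alpha * x)

def simplified_residual_block(x, W, alpha=0.25):
    """Toy 1-D analogue of the simplified block: a single weight layer,
    PReLU, an identity shortcut, then average pooling of width 2."""
    y = prelu(W @ x, alpha) + x            # weight layer + shortcut
    return y.reshape(-1, 2).mean(axis=1)   # average pooling, stride 2
```

Unlike ReLU, PReLU keeps a small gradient on the negative side, which is part of what lets the slimmed-down block train well on small datasets.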
15 pages, 3633 KiB  
Article
Serious Game iDO: Towards Better Education in Dementia Care
by Rytis Maskeliūnas, Robertas Damaševičius, Connie Lethin, Andrius Paulauskas, Anna Esposito, Mauro Catena and Vincenzo Aschettino
Information 2019, 10(11), 355; https://doi.org/10.3390/info10110355 - 18 Nov 2019
Cited by 20 | Viewed by 6097
Abstract
We describe the iDO serious game developed during implementation of the Innovative Digital Training Opportunities on Dementia for Direct Care Workers (IDO) project. The project targets formal and informal caregivers of persons with dementia in order to improve caregiver knowledge and competences through a non-traditional form of training. This paper describes the steps taken to define the iDO caregiver behavior improvement model, design the game mechanics, develop the game art and characters, and implement the gameplay. Furthermore, it assesses the direct impact of the game on caregivers (n = 48) and seniors with early signs of dementia (n = 14) in Lithuania, measured with the Geriatric Depression Scale (GDS) and the Dementia Attitudes Scale (DAS). The caregivers' GDS scores showed a decrease in negative answers from 13.4% (pre-game survey) to 5.2% (post-game survey). The seniors' GDS scores showed a decrease in negative answers from 24.9% (pre-game survey) to 10.9% (post-game survey). The overall DAS scores increased from 6.07 in the pre-game survey to 6.41 in the post-game survey, a statistically significant change for both caregivers and seniors (p < 0.001). We conclude that the game aroused positive moods and attitudes in future caregivers of persons with dementia, indicating a more relaxed state and a decreased fear of carrying out the caring process. Full article
(This article belongs to the Special Issue Advances in Mobile Gaming and Games-based Learning)
18 pages, 1802 KiB  
Article
Dense Model for Automatic Image Description Generation with Game Theoretic Optimization
by Sreela S R and Sumam Mary Idicula
Information 2019, 10(11), 354; https://doi.org/10.3390/info10110354 - 15 Nov 2019
Cited by 4 | Viewed by 3572
Abstract
Due to the rapid growth of deep learning technologies, automatic image description generation is an interesting problem in computer vision and natural language generation. It helps to improve access to photo collections on social media and gives guidance for visually impaired people. Currently, deep neural networks play a vital role in computer vision and natural language processing tasks. The main objective of this work is to generate a grammatically correct description of an image using the semantics of the trained captions. An encoder-decoder framework is used to implement the image description generation task: the encoder is an image parsing module, and the decoder is a surface realization module. The framework uses a densely connected convolutional neural network (DenseNet) for image encoding and a bidirectional long short-term memory network (BLSTM) for language modeling; the outputs are given to the BLSTM caption generator, which is trained to optimize the log-likelihood of the target description of the image. Most existing image captioning works use RNNs or LSTMs for language modeling; RNNs are computationally expensive with limited memory, and LSTMs process the input in only one direction. BLSTM is used here instead, which avoids both problems. In this work, the selection of the best combination of words during caption generation is made using beam search and game-theoretic search, and the results show that game-theoretic search outperforms beam search. The model was evaluated on the standard benchmark dataset Flickr8k, with the Bilingual Evaluation Understudy (BLEU) score as the evaluation measure. A new measure called GCorrect was used to check the grammatical correctness of the descriptions. The proposed model achieves substantial improvements over previous methods on the Flickr8k dataset, producing grammatically correct sentences with a GCorrect of 0.040625 and a BLEU score of 69.96%. Full article
(This article belongs to the Section Artificial Intelligence)
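The beam search used for word selection can be sketched over a toy next-word model; the transition table and vocabulary below are invented for illustration, as a real captioner would score words with the BLSTM decoder at each step.

```python
import math

# Toy next-word probabilities (hypothetical; stands in for decoder output).
TRANS = {
    '<s>': {'a': 0.55, 'b': 0.45},
    'a':   {'x': 0.50, 'y': 0.50},
    'b':   {'x': 0.95, 'y': 0.05},
}

def beam_search(start, steps, width):
    """Keep the `width` highest-scoring partial captions at each step."""
    beams = [([start], 0.0)]  # (sequence, cumulative log-probability)
    for _ in range(steps):
        candidates = []
        for seq, score in beams:
            for word, p in TRANS.get(seq[-1], {}).items():
                candidates.append((seq + [word], score + math.log(p)))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:width]
    return beams[0]
```

In this toy model a greedy search (width 1) commits to the locally best first word 'a', while a width of 2 keeps 'b' alive and finds the globally better caption, which is exactly the failure mode beam width mitigates.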
29 pages, 4338 KiB  
Article
Methods and Challenges Using Multispectral and Hyperspectral Images for Practical Change Detection Applications
by Chiman Kwan
Information 2019, 10(11), 353; https://doi.org/10.3390/info10110353 - 15 Nov 2019
Cited by 38 | Viewed by 4529
Abstract
Multispectral (MS) and hyperspectral (HS) images have been successfully and widely used in remote sensing applications such as target detection, change detection, and anomaly detection. In this paper, we review recent change detection papers and raise, from a practitioner's viewpoint, some challenges and opportunities in using MS and HS images. For example, can we perform change detection using synthetic hyperspectral images? Can we use temporally fused images to perform change detection? Some of these areas are still open and will require more research attention in the coming years. Moreover, to provide context, some recent and representative change detection algorithms using MS and HS images are included, and their advantages and disadvantages are highlighted. Full article
14 pages, 884 KiB  
Article
The Influence of Age, Gender, and Cognitive Ability on the Susceptibility to Persuasive Strategies
by Aisha Muhammad Abdullahi, Kiemute Oyibo, Rita Orji and Abdullahi Abubakar Kawu
Information 2019, 10(11), 352; https://doi.org/10.3390/info10110352 - 15 Nov 2019
Cited by 14 | Viewed by 5107
Abstract
The fact that individuals may react differently to persuasive strategies has prompted a shift in persuasive technology (PT) design from the one-size-fits-all traditional approach to an individualized approach that conforms to individuals' preferences. Given that learners' gender, age, and cognitive level can affect their response to different learning instructions, these factors deserve primacy of place in persuasive educational technology (PET) design. However, their effect on learners' susceptibility to persuasive strategies has not received adequate attention in the extant literature. To close this gap, we carried out an empirical study among 461 participants to investigate whether learners' gender, age, and cognitive ability significantly affect their susceptibility to three key persuasive strategies (social learning, reward, and trustworthiness) in PETs. The results of a repeated-measures analysis of variance (RM-ANOVA) revealed that people with a high cognitive level are more likely to be susceptible to social learning, while people with a low cognitive level are more likely to be susceptible to trustworthiness. Our results also revealed that males are more likely to be susceptible to social learning, while females are more likely to be susceptible to reward and trustworthiness, and that younger adults are more likely to be susceptible to social learning and reward, while older adults are more likely to be susceptible to trustworthiness. These findings reveal persuasive strategies that designers can employ to personalize PTs in higher education to individual users, based on a susceptibility profile determined by age, gender, and cognitive level. Full article
(This article belongs to the Special Issue Personalizing Persuasive Technologies)
11 pages, 231 KiB  
Article
Evaluating Museum Virtual Tours: The Case Study of Italy
by Katerina Kabassi, Alessia Amelio, Vasileios Komianos and Konstantinos Oikonomou
Information 2019, 10(11), 351; https://doi.org/10.3390/info10110351 - 14 Nov 2019
Cited by 33 | Viewed by 5469
Abstract
Virtual tours are an ideal solution for those who are unable to visit a museum, or for those who want a small taste of what a museum presents before their visit. However, these tours often exhibit severe problems when users interact with them. In order to assess the state of museum virtual tours, we present the implementation of an evaluation experiment that combines two multi-criteria decision-making theories, namely the analytic hierarchy process (AHP) and the fuzzy technique for order of preference by similarity to ideal solution (TOPSIS). AHP is used to estimate the weights of the heuristics, and fuzzy TOPSIS is used to evaluate the virtual tours. This paper presents the exact steps that must be followed to implement such an experiment and runs an example experiment on virtual tours of Italian museums. Full article
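The AHP weighting step can be sketched as follows: heuristic weights are derived from a pairwise comparison matrix, here with the geometric-mean (row) method as a common stand-in for the principal-eigenvector computation. The comparison values are illustrative, not the paper's.

```python
import numpy as np

def ahp_weights(M):
    """Priority vector of a pairwise-comparison matrix via the
    geometric-mean (row) method; rows/columns are criteria and
    M[i][j] says how much more important criterion i is than j."""
    M = np.asarray(M, dtype=float)
    gm = M.prod(axis=1) ** (1.0 / M.shape[1])  # row geometric means
    return gm / gm.sum()                       # normalize to sum to 1

# Perfectly consistent toy matrix: criterion 1 is twice as important
# as criterion 2 and four times as important as criterion 3.
M = [[1.0,  2.0, 4.0],
     [0.5,  1.0, 2.0],
     [0.25, 0.5, 1.0]]
```

For a consistent matrix like this one, the method reproduces the stated ratios exactly (weights 4/7, 2/7, 1/7); for real judgment matrices it gives a close approximation and a consistency check is usually added.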
16 pages, 2145 KiB  
Article
A Novel Approach to Working Memory Training Based on Robotics and AI
by Vladimir Araujo, Diego Mendez and Alejandra Gonzalez
Information 2019, 10(11), 350; https://doi.org/10.3390/info10110350 - 12 Nov 2019
Cited by 4 | Viewed by 3113
Abstract
Working memory is an important function of human cognition, since several day-to-day activities depend on it, such as remembering directions or performing mental calculations. Unfortunately, working memory deficiencies affect performance in work- or education-related activities, mainly due to lack of concentration, and many software applications have been developed to improve it. However, users sometimes end up bored with these games and drop out easily. To cope with this, our work explores the use of intelligent robotics and dynamic difficulty adjustment mechanisms to develop a novel working memory training system. The proposed system, based on the Nao robotic platform, is composed of three main components: first, the N-back task stimulates working memory through the recall of visual sequences; second, a BDI model implements an intelligent agent for decision-making during the progress of the game; and third, a fuzzy controller, acting as a dynamic difficulty adjustment system, generates customized levels according to the user. The experimental results of our system, when compared to a computer-based implementation of the N-back game, show a significant improvement in user performance in the game, which might reflect an improvement in working memory. Additionally, thanks to the friendly and interactive interface, participants reported a more immersive and better game experience when using the robot-based system. Full article
(This article belongs to the Section Information Applications)
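The fuzzy-controller component for dynamic difficulty adjustment can be sketched as a minimal rule base mapping recent N-back accuracy to a difficulty change; the membership functions and rules below are invented for illustration and are not the paper's controller.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def difficulty_delta(accuracy):
    """Map accuracy in [0, 1] to a difficulty change via three rules:
    low accuracy -> easier, medium -> hold, high -> harder."""
    low  = tri(accuracy, -0.01, 0.0, 0.6)   # peaks at 0% correct
    mid  = tri(accuracy, 0.4, 0.7, 0.9)
    high = tri(accuracy, 0.7, 1.0, 1.01)    # peaks at 100% correct
    # Defuzzify: weighted average of singleton outputs -1, 0, +1.
    num = -1.0 * low + 0.0 * mid + 1.0 * high
    den = low + mid + high
    return num / den if den else 0.0
```

Because the memberships overlap, difficulty changes smoothly with performance instead of jumping at hard thresholds, which is the usual motivation for a fuzzy rather than rule-table controller.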
26 pages, 2290 KiB  
Review
A Review on UAV-Based Applications for Precision Agriculture
by Dimosthenis C. Tsouros, Stamatia Bibi and Panagiotis G. Sarigiannidis
Information 2019, 10(11), 349; https://doi.org/10.3390/info10110349 - 11 Nov 2019
Cited by 530 | Viewed by 34869
Abstract
Emerging technologies such as the Internet of Things (IoT) offer significant potential in Smart Farming and Precision Agriculture applications, enabling the acquisition of real-time environmental data. IoT devices such as Unmanned Aerial Vehicles (UAVs) can be exploited in a variety of applications related to crop management by capturing high spatial- and temporal-resolution images. These technologies are expected to revolutionize agriculture, enabling decision-making in days instead of weeks and promising significant reductions in cost and increases in yield. Such decisions enable the effective application of farm inputs, supporting the four pillars of precision agriculture, i.e., applying the right practice, at the right place, at the right time, and with the right quantity. However, the actual proliferation and exploitation of UAVs in Smart Farming has not been as robust as expected, mainly due to the challenges confronted when selecting and deploying the relevant technologies, including the data acquisition and image processing methods. The main problem is that there is still no standardized workflow for the use of UAVs in such applications, as it is a relatively new area. In this article, we review the most recent applications of UAVs for Precision Agriculture. We discuss the most common applications and the types of UAVs exploited, and then focus on the data acquisition methods and technologies, pointing out the benefits and drawbacks of each. We also describe the most popular methods for processing aerial imagery and discuss the outcomes of each and its potential applications in farming operations. Full article
(This article belongs to the Special Issue IoT Applications and Industry 4.0)
25 pages, 3395 KiB  
Article
Precision Agriculture: A Remote Sensing Monitoring System Architecture
by Anna Triantafyllou, Panagiotis Sarigiannidis and Stamatia Bibi
Information 2019, 10(11), 348; https://doi.org/10.3390/info10110348 - 09 Nov 2019
Cited by 76 | Viewed by 15673
Abstract
Smart Farming is a development that emphasizes the use of modern technologies in the cyber-physical field management cycle. Technologies such as the Internet of Things (IoT) and Cloud Computing have accelerated the digital transformation of conventional agricultural practices, promising increased production rates and product quality. The adoption of smart farming, however, is hampered by the lack of models providing guidance to practitioners regarding the necessary components of IoT-based monitoring systems. To guide the process of designing and implementing smart farming monitoring systems, in this paper we propose a generic reference architecture model, also taking into consideration a very important non-functional requirement: the energy consumption restriction. Moreover, we present and discuss the technologies comprising the seven layers of the architecture model: the Sensor Layer, the Link Layer, the Encapsulation Layer, the Middleware Layer, the Configuration Layer, the Management Layer, and the Application Layer. Furthermore, the proposed reference architecture model is exemplified in a real-world application for surveying saffron agriculture in Kozani, Greece. Full article
(This article belongs to the Special Issue IoT Applications and Industry 4.0)
16 pages, 1526 KiB  
Article
Can Message-Tailoring Based on Regulatory Fit Theory Improve the Efficacy of Persuasive Physical Activity Systems?
by Leila Sadat Rezai, Jessie Chin, Reicelis Casares-Li, Fan He, Rebecca Bassett-Gunter and Catherine Burns
Information 2019, 10(11), 347; https://doi.org/10.3390/info10110347 - 08 Nov 2019
Cited by 1 | Viewed by 2734
Abstract
Background: Many behaviour-change technologies have been designed to help people with a sedentary lifestyle become more physically active. However, challenges exist in designing systems that work effectively. One key challenge is that many of those technologies do not account for differences in individuals' psychological characteristics. To address that problem, tailoring the communication between a system and its users has been proposed and examined. Although message tailoring has been studied extensively in public health education research as a technique to communicate health information and to educate people, its use in the design of behaviour-change technologies has not been adequately investigated. Objective: The goal of this study was to explore the impact of message tailoring when tailoring was grounded in Higgins' Regulatory Fit Theory and messages were constructed to promote physical activity. Method: An email intervention was designed and developed that sent participants daily health messages for 14 consecutive days. There were three categories of messages: reminders, promotion-messages, and prevention-messages. The effect of the messages on behaviour was compared between those who received messages that fitted their self-regulatory orientation and those who received non-fitted messages. Results: Participants who received promotion- or prevention-messages walked for longer periods of time than those who received reminders in the control group. Comparing the first two groups, promotion-message recipients on average walked more than those who received prevention-messages; in other words, promotion-messages acted more persuasively than prevention-messages and reminders. Contrary to our hypothesis, individuals who received messages that fitted their self-regulatory orientation did not walk more than those who received non-fitted messages.
Conclusions: The efficacy of Higgins' Regulatory Fit Theory in the design of tailored health messages was examined. This study did not find support for the use of that theory in guiding the design of persuasive health messages that promote physical activity. Therefore, more research is necessary to investigate the effectiveness of tailoring strategies. Full article
(This article belongs to the Special Issue Personalizing Persuasive Technologies)
11 pages, 2006 KiB  
Article
Combining Visual Contrast Information with Sound Can Produce Faster Decisions
by Birgitta Dresp-Langley and Marie Monfouga
Information 2019, 10(11), 346; https://doi.org/10.3390/info10110346 - 07 Nov 2019
Cited by 5 | Viewed by 2828
Abstract
Piéron's and Chocholle's seminal psychophysical work predicts that human response time to information relative to visual contrast and/or sound frequency decreases when contrast intensity or sound frequency increases. The goal of this study is to bring to the forefront the ability of individuals to use visual contrast intensity and sound frequency in combination for faster perceptual decisions of relative depth ("nearer") in planar (2D) object configurations based on physical variations in luminance contrast. Computer-controlled images with two abstract patterns of varying contrast intensity, one on the left and one on the right, preceded or not by a pure tone of varying frequency, were shown to healthy young humans in controlled experimental sequences. Their task (two-alternative forced choice) was to decide as quickly as possible which of the two patterns, the left or the right one, appeared to "stand out as if it were nearer" in terms of apparent (subjective) visual depth. The results showed that the combinations of varying relative visual contrast with sounds of varying frequency produced an additive facilitation effect on choice response times: a stronger visual contrast combined with a higher sound frequency produced shorter forced-choice response times. This new effect is predicted by audio-visual probability summation. Full article
(This article belongs to the Special Issue Information-Centred Approaches to Visual Perception)
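The prediction the study builds on, Piéron's law, says mean response time falls as a power function of stimulus intensity: RT = R0 + k·I^(−β). A minimal numerical sketch (all coefficient values below are hypothetical, not fitted to this study's data):

```python
def pieron_rt(intensity, r0=0.25, k=0.2, beta=0.5):
    """Predicted mean response time (seconds) for a stimulus of the given
    intensity: an irreducible baseline r0 plus a power-law term that
    shrinks as intensity grows."""
    return r0 + k * intensity ** (-beta)

# Stronger contrast (or higher sound frequency) predicts a faster response.
assert pieron_rt(4.0) < pieron_rt(1.0)
```

The additive audio-visual effect reported in the paper corresponds to both factors shortening the response time at once.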

18 pages, 17734 KiB  
Article
A Smart Energy Harvesting Platform for Wireless Sensor Network Applications
by Gabriel Filios, Ioannis Katsidimas, Sotiris Nikoletseas and Ioannis Tsenempis
Information 2019, 10(11), 345; https://doi.org/10.3390/info10110345 - 06 Nov 2019
Cited by 2 | Viewed by 3491
Abstract
Advances in micro-electro-mechanical systems (MEMS) as well as the solutions for power scavenging can now provide feasible alternatives in a variety of applications. Wireless sensor networks (WSN), which operate on rechargeable batteries, could be based on a fresh basis which aims both at [...] Read more.
Advances in micro-electro-mechanical systems (MEMS), together with solutions for power scavenging, can now provide feasible alternatives in a variety of applications. Wireless sensor networks (WSNs), which operate on rechargeable batteries, can be designed on a fresh basis that aims at both environmental energy collection and wireless charging, in various shapes and scales. Consequently, a potentially unlimited energy supply can replace the traditional assumption of a limited energy budget (an assumption that also impacts the system’s efficiency). The presented platform is able to efficiently power a low-power IoT system with processing, sensing, and wireless transmission capabilities. It incorporates a cutting-edge energy-management IC that enables exceptional energy harvesting from low-power, small-scale energy generators. In contrast to other schemes, it supports not only a range of power-supply alternatives but also a composite energy-storage system. The objective of this paper is to describe the design of the system, the integrated intelligence, and its power-autonomy performance. Full article
(This article belongs to the Special Issue IoT Applications and Industry 4.0)

21 pages, 1320 KiB  
Article
Investigation of the Moderating Effect of Culture on Users’ Susceptibility to Persuasive Features in Fitness Applications
by Kiemute Oyibo and Julita Vassileva
Information 2019, 10(11), 344; https://doi.org/10.3390/info10110344 - 06 Nov 2019
Cited by 12 | Viewed by 3176
Abstract
Persuasive technologies have been identified as a potential motivational tool to tackle the rising problem of physical inactivity worldwide, with research showing they are more likely to be successful if tailored to the target audience. However, in the physical activity domain, there is [...] Read more.
Persuasive technologies have been identified as a potential motivational tool to tackle the rising problem of physical inactivity worldwide, with research showing they are more likely to be successful if tailored to the target audience. However, in the physical activity domain, there is limited research on how culture moderates users’ susceptibility to the various persuasive features employed in mobile health applications aimed to motivate behavior change. To bridge this gap, we conducted an empirical study among 256 participants from collectivist (n = 67) and individualist (n = 189) cultures to determine their culture-specific persuasion profiles with respect to six persuasive features commonly employed in fitness applications on the market. The persuasive features include two personal features (goal-setting/self-monitoring and reward) and four social features (competition, cooperation, social learning and social comparison). We based our study on the rating of storyboards (on which each of the six persuasive features is illustrated) and the ranking of the six persuasive features in terms of perceived persuasiveness. The results of our analysis showed that users from individualist and collectivist cultures significantly differ in their persuasion profiles. Based on our rating measure, collectivist users are more likely to be susceptible to all six persuasive features (personal and social) than individualist users, who are only likely to be susceptible to personal features. However, based on our ranking measure, individualist users are more likely to be susceptible to personal features (goal-setting/self-monitoring and reward) than collectivist users. In contrast, collectivist users are more likely to be susceptible to social features (cooperation and social learning) than individualist users. Based on these findings, we provide culture-specific persuasive technology design guidelines. 
Our study is the first to uncover the moderating effect of culture on users’ susceptibility to commonly employed persuasive features in fitness applications. Full article
(This article belongs to the Special Issue Personalizing Persuasive Technologies)

15 pages, 3317 KiB  
Article
Design of IoT-based Cyber–Physical Systems: A Driverless Bulldozer Prototype
by Nelson H. Carreras Guzman and Adam Gergo Mezovari
Information 2019, 10(11), 343; https://doi.org/10.3390/info10110343 - 05 Nov 2019
Cited by 10 | Viewed by 4173
Abstract
From autonomous vehicles to robotics and machinery, organizations are developing autonomous transportation systems in various domains. Strategic incentives point towards a fourth industrial revolution of cyber–physical systems with higher levels of automation and connectivity throughout the Internet of Things (IoT) that interact with [...] Read more.
From autonomous vehicles to robotics and machinery, organizations are developing autonomous transportation systems in various domains. Strategic incentives point towards a fourth industrial revolution of cyber–physical systems with higher levels of automation and connectivity throughout the Internet of Things (IoT) that interact with the physical world. In the construction and mining sectors, these developments are still in their infancy, and practitioners are interested in autonomous solutions to enhance efficiency and reliability. This paper illustrates the enhanced design of a driverless bulldozer prototype using IoT-based solutions for the remote control and navigation tracking of the mobile machinery. We illustrate the integration of a cloud application, communication protocols, and a wireless communication network to control a small-scale bulldozer from a remote workstation. Furthermore, we explain a new tracking functionality of work completion using maps and georeferenced indicators available via a user interface. Finally, we provide a preliminary safety and security risk assessment of the system prototype and propose guidance for application in real-scale machinery. Full article
(This article belongs to the Special Issue IoT Applications and Industry 4.0)

31 pages, 2001 KiB  
Article
Role-Engineering Optimization with Cardinality Constraints and User-Oriented Mutually Exclusive Constraints
by Wei Sun, Hui Su and Hongbing Liu
Information 2019, 10(11), 342; https://doi.org/10.3390/info10110342 - 04 Nov 2019
Cited by 6 | Viewed by 2878
Abstract
Role-based access control (RBAC) is one of the most popular access-control mechanisms because of its convenience for management and various security policies, such as cardinality constraints, mutually exclusive constraints, and user-capability constraints. Role-engineering technology is an effective method to construct RBAC systems. However, [...] Read more.
Role-based access control (RBAC) is one of the most popular access-control mechanisms because of its convenience for management and its support for various security policies, such as cardinality constraints, mutually exclusive constraints, and user-capability constraints. Role-engineering technology is an effective method to construct RBAC systems. However, mining scales are very large, and there are redundancies in the mining results. Furthermore, conventional role-engineering methods not only fail to consider more than one cardinality constraint, but also cannot ensure authorization security. To address these issues, this paper proposes a novel method called role-engineering optimization with cardinality constraints and user-oriented mutually exclusive constraints (REO_CCUMEC). First, we convert basic role mining into a clustering problem based on the similarities between users, and apply partitioning and compression technologies in order to eliminate redundancies while maintaining usability for role mining. Second, we present three role-optimization problems and the corresponding algorithms for satisfying single or double cardinality constraints. Third, in order to evaluate the performance of authorizations in a role-engineering system, maximal role assignments are implemented while satisfying multiple security constraints. Theoretical analyses and experiments demonstrate the accuracy, effectiveness, and efficiency of the proposed method. Full article
(This article belongs to the Section Information Systems)

22 pages, 7817 KiB  
Article
Fuzzy Reinforcement Learning and Curriculum Transfer Learning for Micromanagement in Multi-Robot Confrontation
by Chunyang Hu and Meng Xu
Information 2019, 10(11), 341; https://doi.org/10.3390/info10110341 - 02 Nov 2019
Cited by 5 | Viewed by 3471
Abstract
Multi-Robot Confrontation on physics-based simulators is a complex and time-consuming task, but simulators are required to evaluate the performance of the advanced algorithms. Recently, a few advanced algorithms have been able to produce considerably complex levels in the context of the robot confrontation [...] Read more.
Multi-robot confrontation on physics-based simulators is a complex and time-consuming task, but simulators are required to evaluate the performance of advanced algorithms. Recently, a few advanced algorithms have been able to handle considerably complex levels in the context of the robot confrontation system when the agents face multiple opponents. Meanwhile, current confrontation decision-making systems suffer from difficulties in optimization and generalization. In this paper, fuzzy reinforcement learning (RL) and curriculum transfer learning are applied to micromanagement in a robot confrontation system. Firstly, an improved Q-learning in the semi-Markov decision-making process is designed to train the agents, and an efficient RL model is defined to avoid the curse of dimensionality. Secondly, a multi-agent RL algorithm with parameter sharing is proposed to train the agents. We use a neural network with adaptive momentum acceleration as a function approximator to estimate the state-action function. Then, a method of fuzzy logic is used to regulate the learning rate of RL. Thirdly, a curriculum transfer learning method is used to extend the RL model to more difficult scenarios, which ensures the generalization of the decision-making system. The experimental results show that the proposed method is effective. Full article
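The core update in the first step is standard tabular Q-learning, with the learning rate regulated by a fuzzy-logic module. A minimal sketch of the update rule itself (the state/action names and the fixed `alpha` are illustrative; the fuzzy regulation of `alpha` is not reproduced here):

```python
def q_update(Q, s, a, r, s_next, alpha, gamma=0.9):
    """One tabular Q-learning step: move Q[s][a] toward the bootstrapped
    target r + gamma * max_a' Q[s_next][a'].  In the paper, a fuzzy-logic
    module would supply `alpha` instead of a fixed constant."""
    best_next = max(Q[s_next].values()) if Q[s_next] else 0.0
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

Q = {"s0": {"fire": 0.0}, "s1": {"fire": 1.0}}
q_update(Q, "s0", "fire", 1.0, "s1", alpha=0.5)
# target = 1.0 + 0.9 * 1.0 = 1.9, so Q["s0"]["fire"] moves halfway to 0.95
```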

15 pages, 974 KiB  
Article
Approximate Completed Trace Equivalence of ILAHSs Based on SAS Solving
by Honghui He, Jinzhao Wu and Juxia Xiong
Information 2019, 10(11), 340; https://doi.org/10.3390/info10110340 - 01 Nov 2019
Cited by 1 | Viewed by 1955
Abstract
The ILAHS (inhomogeneous linear algebraic hybrid system) is a kind of classic hybrid system. For the purpose of optimizing the design of ILAHS, one important strategy is to introduce equivalence to reduce the states. Recent advances in the hybrid system indicate that approximate [...] Read more.
The ILAHS (inhomogeneous linear algebraic hybrid system) is a classic kind of hybrid system. For the purpose of optimizing the design of an ILAHS, one important strategy is to introduce equivalence to reduce the states. Recent advances in hybrid systems indicate that approximate trace equivalence can further simplify the design of an ILAHS. To address this issue, the paper first introduces the trajectory metric d_trj for measuring the deviation between the behaviors of two hybrid systems. Given a deviation ε ≥ 0, the original ILAHS H1 can be transformed into the approximate ILAHS H2; then, in trace-equivalence semantics, H2 is further reduced to H3 with the same functions, and hence H1 is ε-approximate trace equivalent to H3. In particular, ε = 0 corresponds to traditional trace equivalence. We implement an approach based on RealRootClassification to determine the approximation between ILAHSs. The paper also shows that existing approaches are only special cases of our method. Finally, we illustrate the effectiveness and practicality of our method on an example. Full article

19 pages, 569 KiB  
Article
A Novel Multi-Attribute Decision Making Method Based on The Double Hierarchy Hesitant Fuzzy Linguistic Generalized Power Aggregation Operator
by Zhengmin Liu, Xiaolan Zhao, Lin Li, Xinya Wang and Di Wang
Information 2019, 10(11), 339; https://doi.org/10.3390/info10110339 - 30 Oct 2019
Cited by 17 | Viewed by 2488
Abstract
A double hierarchy hesitant fuzzy linguistic term set (DHHFLT) is deemed as an effective and powerful linguistic expression which models complex linguistic decision information more accurately by using two different hierarchy linguistic term sets. The purpose of this paper is to propose a [...] Read more.
A double hierarchy hesitant fuzzy linguistic term set (DHHFLT) is deemed as an effective and powerful linguistic expression which models complex linguistic decision information more accurately by using two different hierarchy linguistic term sets. The purpose of this paper is to propose a multi-attribute decision making method to tackle complex decision issues in which attribute values are represented as double hierarchy hesitant fuzzy linguistic numbers, and there are some extreme or unreasonable data in the attribute values. To do this, firstly, four double hierarchy hesitant fuzzy linguistic generalized power aggregation operators are introduced, including the double hierarchy hesitant fuzzy linguistic generalized power average (DHHFLGPA) operator, the double hierarchy hesitant fuzzy linguistic generalized power geometric (DHHFLGPG) operator, and their weighted forms. Thereafter, several favorable properties, as well as representative cases of the proposed operators, are investigated in detail. Moreover, by virtue of the proposed operators, a novel approach is developed for coping with multi-attribute decision making cases in the double hierarchy hesitant fuzzy linguistic context. Finally, an illustrated example is given to demonstrate the practical application of the presented approach, an availability verification is given to show its validity, and a comparative analysis is also conducted to highlight the advantages of the proposed approach. Full article
(This article belongs to the Section Information Theory and Methodology)
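The power-average idea behind these operators can be illustrated on plain real numbers (Yager's classic power average), where each value is weighted by how strongly the other values support it; the paper lifts this idea to double hierarchy hesitant fuzzy linguistic numbers. The support function below is a common illustrative choice, not the one used in the paper:

```python
def power_average(values):
    """Yager-style power average: values supported by many similar values
    receive larger weights, damping extreme or unreasonable data."""
    def support(x, y):                      # similarity measure in [0, 1]
        return max(0.0, 1.0 - abs(x - y))
    n = len(values)
    t = [sum(support(values[i], values[j]) for j in range(n) if j != i)
         for i in range(n)]
    weights = [1.0 + ti for ti in t]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# An extreme value (5.0) is down-weighted relative to the plain mean (2.0).
assert power_average([0.5, 0.5, 5.0]) < 2.0
```

This damping of outliers is exactly why power aggregation suits decision problems containing "extreme or unreasonable data".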

16 pages, 4649 KiB  
Article
Automatic Wireless Signal Classification: A Neural-Induced Support Vector Machine-Based Approach
by Arfan Haider Wahla, Lan Chen, Yali Wang and Rong Chen
Information 2019, 10(11), 338; https://doi.org/10.3390/info10110338 - 30 Oct 2019
Cited by 1 | Viewed by 2524
Abstract
Automatic Classification of Wireless Signals (ACWS), which is an intermediate step between signal detection and demodulation, is investigated in this paper. ACWS plays a crucial role in several military and non-military applications, by identifying interference sources and adversary attacks, to achieve efficient radio [...] Read more.
Automatic Classification of Wireless Signals (ACWS), an intermediate step between signal detection and demodulation, is investigated in this paper. ACWS plays a crucial role in several military and non-military applications, identifying interference sources and adversary attacks to achieve efficient radio spectrum management. The performance of traditional feature-based (FB) classification approaches is limited by their specific input feature sets, which in turn results in poor generalization under unknown conditions. Therefore, in this paper, a novel feature-based classifier, the Neural-Induced Support Vector Machine (NSVM), is proposed, in which features are learned automatically from raw input signals using Convolutional Neural Networks (CNN). The output of NSVM is given by a Gaussian Support Vector Machine (SVM), which takes the features learned by the CNN as its input. NSVM is trained as a single architecture and thereby learns to minimize a margin-based loss instead of a cross-entropy loss. It outperforms a traditional softmax-based CNN modulation classifier, with faster convergence of accuracy and loss curves during training. Furthermore, the robustness of the NSVM classifier is verified by extensive simulation experiments in the presence of several non-ideal real-world channel impairments over a range of signal-to-noise ratio (SNR) values. NSVM performs remarkably well in classifying wireless signals: the overall average classification accuracy exceeds 97% at a low SNR of −2 dB and exceeds 99% at SNR = 10 dB. In addition, an analytical comparison with other studies shows that the performance of NSVM is superior over a range of settings. Full article
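The margin-based objective that distinguishes NSVM from a softmax classifier can be sketched as a multiclass hinge loss over feature vectors. This is a toy NumPy version operating on pre-extracted features, not the authors' CNN architecture:

```python
import numpy as np

def multiclass_hinge_loss(W, X, y, margin=1.0):
    """Crammer-Singer style hinge loss: every wrong class whose score
    comes within `margin` of the true class's score is penalized.
    W: (n_classes, n_features), X: (n_samples, n_features), y: labels."""
    scores = X @ W.T                               # (n_samples, n_classes)
    true_scores = scores[np.arange(len(y)), y]
    margins = np.maximum(0.0, scores - true_scores[:, None] + margin)
    margins[np.arange(len(y)), y] = 0.0            # no self-penalty
    return margins.sum(axis=1).mean()

W = np.eye(2)                                      # toy 2-class linear head
X = np.array([[5.0, 0.0], [0.0, 5.0]])
y = np.array([0, 1])
assert multiclass_hinge_loss(W, X, y) == 0.0       # confident and correct
```

Unlike cross-entropy, the loss is exactly zero once every sample clears the margin, which is the property an SVM-style output layer exploits.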

27 pages, 3667 KiB  
Review
A Botnets Circumspection: The Current Threat Landscape, and What We Know So Far
by Emmanuel C. Ogu, Olusegun A. Ojesanmi, Oludele Awodele and ‘Shade Kuyoro
Information 2019, 10(11), 337; https://doi.org/10.3390/info10110337 - 30 Oct 2019
Cited by 13 | Viewed by 12162
Abstract
Botnets have carved a niche in contemporary networking and cybersecurity due to the impact of their operations. The botnet threat continues to evolve and adapt to countermeasures as the security landscape continues to shift. As research efforts attempt to seek a deeper and [...] Read more.
Botnets have carved a niche in contemporary networking and cybersecurity due to the impact of their operations. The botnet threat continues to evolve and adapt to countermeasures as the security landscape continues to shift. As research efforts attempt to seek a deeper and more robust understanding of the nature of the threat for more effective solutions, it becomes necessary to again traverse the threat landscape and consolidate what is known so far about botnets, so that future research directions can be more easily visualised. This research uses the general exploratory approach of the qualitative methodology to survey the current botnet threat landscape: covering the typology of botnets and their owners, the structure and lifecycle of botnets, botnet attack modes and control architectures, existing countermeasure solutions and their limitations, as well as the prospects of the botnet threat. The product is a consolidation of knowledge pertaining to the nature of the botnet threat, which also informs future research directions into aspects of the threat landscape where work still needs to be done. Full article
(This article belongs to the Special Issue Botnets)

20 pages, 2691 KiB  
Article
Resource Allocation Combining Heuristic Matching and Particle Swarm Optimization Approaches: The Case of Downlink Non-Orthogonal Multiple Access
by Dimitrios Pliatsios and Panagiotis Sarigiannidis
Information 2019, 10(11), 336; https://doi.org/10.3390/info10110336 - 30 Oct 2019
Cited by 19 | Viewed by 3272
Abstract
The ever-increasing requirement of massive connectivity, due to the rapid deployment of internet of things (IoT) devices, in the emerging 5th generation (5G) mobile networks commands for even higher utilization of the available spectrum. Non-orthogonal multiple access (NOMA) is a promising solution that [...] Read more.
The ever-increasing requirement for massive connectivity in the emerging 5th generation (5G) mobile networks, due to the rapid deployment of Internet of Things (IoT) devices, calls for even higher utilization of the available spectrum. Non-orthogonal multiple access (NOMA) is a promising solution that can effectively accommodate a higher number of users, resulting in increased spectrum utilization. In this work, we aim to maximize the total throughput of a NOMA system while maintaining a good level of fairness among the users. We propose a three-step method: the first step matches users to channels using a heuristic matching algorithm, the second step utilizes the particle swarm optimization algorithm to allocate power to each channel, and in the third step the power allocated to each channel is further distributed to the multiplexed users based on their respective channel gains. Extensive performance simulations show that the proposed method offers notable improvements, e.g., 15% in system throughput and 55% in user fairness. Full article
(This article belongs to the Special Issue IoT Applications and Industry 4.0)
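The second step, particle-swarm power allocation across channels, can be sketched as follows. The sum-rate objective, channel gains, and PSO parameters below are illustrative, not the paper's system model:

```python
import numpy as np

rng = np.random.default_rng(0)

def sum_rate(power, gains, noise=1.0):
    """Toy objective: aggregate Shannon rate over the sub-channels."""
    return np.log2(1.0 + gains * power / noise).sum()

def _project(pos, budget):
    """Keep allocations non-negative and rescale each row to the budget."""
    pos = np.clip(pos, 1e-9, None)
    return pos * (budget / pos.sum(axis=1, keepdims=True))

def pso_allocate(gains, budget=10.0, n_particles=30, iters=100):
    """Particle-swarm search for a per-channel power split that maximizes
    the toy sum rate under a total power budget."""
    dim = len(gains)
    pos = _project(rng.random((n_particles, dim)), budget)
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([sum_rate(p, gains) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = _project(pos + vel, budget)
        vals = np.array([sum_rate(p, gains) for p in pos])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest
```

For gains like [2.0, 0.5] the search is expected to favor the stronger channel, as water-filling predicts; the paper's third step would then split each channel's share among its multiplexed users.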

18 pages, 902 KiB  
Article
Studying Transaction Fees in the Bitcoin Blockchain with Probabilistic Logic Programming
by Damiano Azzolini, Fabrizio Riguzzi and Evelina Lamma
Information 2019, 10(11), 335; https://doi.org/10.3390/info10110335 - 30 Oct 2019
Cited by 18 | Viewed by 5863
Abstract
In Bitcoin, if a miner is able to solve a computationally hard problem called proof of work, it will receive an amount of bitcoin as a reward which is the sum of the fees for the transactions included in a block plus an [...] Read more.
In Bitcoin, if a miner is able to solve a computationally hard problem called proof of work, it receives an amount of bitcoin as a reward, which is the sum of the fees for the transactions included in a block plus an amount that is inversely proportional to the number of blocks discovered so far. At the time of writing, the block reward is several orders of magnitude greater than the sum of transaction fees. Usually, miners try to collect the largest reward by including transactions associated with high fees. The main purpose of transaction fees is to prevent network spamming, but they are also used to prioritize transactions. To pay the minimum amount of fees, users usually have to find a compromise between fees and the urgency of a transaction. In this paper, we develop a probabilistic logic model to experimentally analyze how fees affect confirmation time and the miner's revenue, and to predict whether an increase in average fees will generate a situation in which a miner gains more reward by not following the protocol. Full article
(This article belongs to the Special Issue Blockchain and Smart Contract Technologies)
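The revenue model the analysis builds on (block subsidy plus the fees of the included transactions, with miners greedily preferring high fee rates) can be sketched deterministically; all fee, size, and subsidy values below are illustrative:

```python
def select_transactions(mempool, block_size):
    """Greedy revenue-maximizing miner: take transactions in decreasing
    fee-per-byte order until the block is full.
    mempool: list of (fee, size) pairs; sizes in bytes."""
    chosen, used = [], 0
    for fee, size in sorted(mempool, key=lambda t: t[0] / t[1], reverse=True):
        if used + size <= block_size:
            chosen.append((fee, size))
            used += size
    return chosen

def miner_revenue(mempool, block_size, subsidy=12.5):
    """Block subsidy (12.5 BTC in the 2019 reward era) plus the fees of
    the transactions that fit in the block."""
    fees = sum(fee for fee, _ in select_transactions(mempool, block_size))
    return subsidy + fees
```

The paper replaces this deterministic picture with a probabilistic logic program so that confirmation times and deviant-miner incentives can be analyzed under uncertainty.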

12 pages, 757 KiB  
Article
Relative Reduction of Neighborhood-Covering Pessimistic Multigranulation Rough Set Based on Evidence Theory
by Xiaoying You, Jinjin Li and Hongkun Wang
Information 2019, 10(11), 334; https://doi.org/10.3390/info10110334 - 29 Oct 2019
Cited by 13 | Viewed by 2055
Abstract
Relative reduction of multiple neighborhood-covering with multigranulation rough set has been one of the hot research topics in knowledge reduction theory. In this paper, we explore the relative reduction of covering information system by combining the neighborhood-covering pessimistic multigranulation rough set with evidence [...] Read more.
Relative reduction of multiple neighborhood-covering with multigranulation rough set has been one of the hot research topics in knowledge reduction theory. In this paper, we explore the relative reduction of covering information system by combining the neighborhood-covering pessimistic multigranulation rough set with evidence theory. First, the lower and upper approximations of multigranulation rough set in neighborhood-covering information systems are introduced based on the concept of neighborhood of objects. Second, the belief and plausibility functions from evidence theory are employed to characterize the approximations of neighborhood-covering multigranulation rough set. Then the relative reduction of neighborhood-covering information system is investigated by using the belief and plausibility functions. Finally, an algorithm for computing a relative reduction of neighborhood-covering pessimistic multigranulation rough set is proposed according to the significance of coverings defined by the belief function, and its validity is examined by a practical example. Full article
(This article belongs to the Section Information Theory and Methodology)
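The belief and plausibility functions used to characterize the approximations come from Dempster-Shafer evidence theory; on a finite frame they reduce to sums over a basic mass assignment. A minimal sketch (the mass values below are illustrative):

```python
def belief(mass, event):
    """Bel(A): total mass of focal elements wholly contained in A."""
    return sum(m for focal, m in mass.items() if focal <= event)

def plausibility(mass, event):
    """Pl(A): total mass of focal elements that intersect A."""
    return sum(m for focal, m in mass.items() if focal & event)

# Basic mass assignment over the frame {a, b, c} (values illustrative).
mass = {frozenset("a"): 0.4, frozenset("ab"): 0.3, frozenset("bc"): 0.3}
A = frozenset("a")
assert belief(mass, A) <= plausibility(mass, A)  # Bel is the lower bound
```

In the paper, these two set functions bound the lower and upper approximations of the neighborhood-covering multigranulation rough set, and the belief function defines the significance of coverings used by the reduction algorithm.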
20 pages, 720 KiB  
Article
Internet of Things Infrastructure for Security and Safety in Public Places
by Angelos Chatzimichail, Christos Chatzigeorgiou, Athina Tsanousa, Dimos Ntioudis, Georgios Meditskos, Fotis Andritsopoulos, Christina Karaberi, Panagiotis Kasnesis, Dimitrios G. Kogias, Georgios Gorgogetas, Stefanos Vrochidis, Charalampos Patrikakis and Ioannis Kompatsiaris
Information 2019, 10(11), 333; https://doi.org/10.3390/info10110333 - 28 Oct 2019
Cited by 7 | Viewed by 3427
Abstract
We present the technologies and the theoretical background of an intelligent interconnected infrastructure for public security and safety. The innovation of the framework lies in the intelligent combination of devices and human information towards human and situational awareness, so as to provide a [...] Read more.
We present the technologies and the theoretical background of an intelligent interconnected infrastructure for public security and safety. The innovation of the framework lies in the intelligent combination of devices and human information oriented towards human and situational awareness, so as to provide a protected and secure environment for citizens. The framework is currently being used to support visitors in public spaces and events by creating the appropriate infrastructure to address a set of urgent situations, such as health-related problems and missing children in overcrowded environments, supporting smart links between humans and entities on the basis of goals, and adapting device operation to comply with human objectives, profiles, and privacy. State-of-the-art technologies in the domain of IoT data collection and analytics are combined with localization techniques, ontologies, reasoning mechanisms, and data aggregation in order to acquire a better understanding of the ongoing situation and to inform the necessary people and devices to act accordingly. Finally, we present the first results on people localization and the platform's ontology and representation framework. Full article
(This article belongs to the Special Issue IoT Applications and Industry 4.0)

22 pages, 1788 KiB  
Article
The Construction of the Past: Towards a Theory for Knowing the Past
by Kenneth Thibodeau
Information 2019, 10(11), 332; https://doi.org/10.3390/info10110332 - 28 Oct 2019
Cited by 11 | Viewed by 3623
Abstract
This paper presents Constructed Past Theory, an epistemological theory about how we come to know things that happened or existed in the past. The theory is expounded both in text and in a formal model comprising UML class diagrams. The ideas presented here [...] Read more.
This paper presents Constructed Past Theory, an epistemological theory about how we come to know things that happened or existed in the past. The theory is expounded both in text and in a formal model comprising UML class diagrams. The ideas presented here have been developed in a half century of experience as a practitioner in the management of information and automated systems in the US government and as a researcher in several collaborations, notably the four international and multidisciplinary InterPARES projects. This work is part of a broader initiative, providing a conceptual framework for reformulating the concepts and theories of archival science in order to enable a new discipline whose assertions are empirically and, wherever possible, quantitatively testable. The new discipline, called archival engineering, is intended to provide an appropriate, coherent foundation for the development of systems and applications for managing, preserving and providing access to digital information, development which is necessitated by the exponential growth and explosive diversification of data recorded in digital form and the use of digital data in an ever increasing variety of domains. Both the text and model are an initial exposition of the theory that both requires and invites further development. Full article
(This article belongs to the Special Issue Big Data Research, Development, and Applications––Big Data 2018)

14 pages, 4248 KiB  
Article
Wideband Spectrum Sensing Method Based on Channels Clustering and Hidden Markov Model Prediction
by Huan Wang, Bin Wu, Yuancheng Yao and Mingwei Qin
Information 2019, 10(11), 331; https://doi.org/10.3390/info10110331 - 25 Oct 2019
Cited by 4 | Viewed by 2209
Abstract
Spectrum sensing is the necessary premise for implementing cognitive radio technology. The conventional wideband spectrum sensing methods mainly work with sweeping frequency and still face major challenges in performance and efficiency. This paper introduces a new wideband spectrum sensing method based on channels [...] Read more.
Spectrum sensing is the necessary premise for implementing cognitive radio technology. Conventional wideband spectrum sensing methods mainly work by sweeping frequencies and still face major challenges in performance and efficiency. This paper introduces a new wideband spectrum sensing method based on channel clustering and prediction. The method relies on dividing the wideband spectrum into uniform sub-channels and employs a density-based clustering algorithm called Ordering Points To Identify the Clustering Structure (OPTICS) to cluster the channels according to the correlation between them. A detection channel (DC) is selected and sensed for each cluster, and the states of the other channels in the cluster (estimated channels, ECs) are then predicted with a Hidden Markov Model (HMM), so that all channel states of the wideband spectrum are finally obtained. The simulation results show that the proposed method effectively improves wideband spectrum sensing performance. Full article
(This article belongs to the Section Information and Communications Technology)
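The prediction step for the estimated channels can be sketched with a simple two-state (idle/busy) Markov chain; the HMM in the paper adds an observation layer on top of exactly this kind of state-transition model. The transition probabilities below are illustrative:

```python
import numpy as np

def predict_state(trans, state, steps=1):
    """Propagate a known channel state through `steps` transitions of a
    Markov chain (0 = idle, 1 = busy) and return the most likely state
    together with the full probability distribution."""
    dist = np.zeros(len(trans))
    dist[state] = 1.0
    for _ in range(steps):
        dist = dist @ trans
    return int(dist.argmax()), dist

# Rows: current state; columns: next state (illustrative probabilities).
trans = np.array([[0.9, 0.1],
                  [0.3, 0.7]])
state, dist = predict_state(trans, state=1, steps=1)
assert state == 1 and abs(dist[1] - 0.7) < 1e-12
```

Predicting the ECs this way, instead of sensing every sub-channel, is what saves the sweeping effort of conventional wideband sensing.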

19 pages, 3191 KiB  
Article
Analysis of Data Persistence in Collaborative Content Creation Systems: The Wikipedia Case
by Lorenzo Bracciale, Pierpaolo Loreti, Andrea Detti and Nicola Blefari Melazzi
Information 2019, 10(11), 330; https://doi.org/10.3390/info10110330 - 25 Oct 2019
Cited by 1 | Viewed by 2766
Abstract
A very common problem in designing caching/prefetching systems, distribution networks, search engines, and web-crawlers is determining how long a given content lasts before being updated, i.e., its update frequency. Indeed, while some content is not frequently updated (e.g., videos), in other cases revisions [...] Read more.
A very common problem in designing caching/prefetching systems, distribution networks, search engines, and web crawlers is determining how long a given content lasts before being updated, i.e., its update frequency. Indeed, while some content is not frequently updated (e.g., videos), in other cases revisions periodically invalidate contents. In this work, we present an analysis of Wikipedia, currently the 5th most visited website in the world, evaluating the statistics of updates of its pages and their relationship with page view statistics. We discovered that the number of updates of a page follows a lognormal distribution. We provide fitting parameters as well as a goodness-of-fit analysis, showing the statistical significance of the model in describing the empirical data. We perform an analysis of the views–updates relationship, showing that over a time period of a month, there is no evident correlation between the most updated pages and the most viewed pages. However, observing specific pages, we show that there is a strong correlation between the peaks of views and updates, and we find that in more than 50% of cases, the time difference between the two peaks is less than a week. This reflects an underlying process whereby an event causes both an update peak and a visit peak, occurring with different time delays. This behavior can pave the way for predictive traffic analysis applications based on content update statistics. Finally, we show how the model can be used to evaluate the performance of an in-network caching scenario. Full article
(This article belongs to the Special Issue Knowledge Discovery on the Web)
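The lognormal model of page-update counts can be checked with a maximum-likelihood fit, which for the lognormal is simply the mean and standard deviation of the log-transformed data. The sketch below uses synthetic data, not the Wikipedia dataset:

```python
import numpy as np

def fit_lognormal(samples):
    """MLE for a lognormal: mu and sigma of the underlying normal are the
    mean and standard deviation of log(samples)."""
    logs = np.log(samples)
    return logs.mean(), logs.std()

rng = np.random.default_rng(42)
updates = rng.lognormal(mean=2.0, sigma=0.5, size=10_000)  # synthetic counts
mu, sigma = fit_lognormal(updates)
# With 10,000 samples the estimates land close to the true (2.0, 0.5).
assert abs(mu - 2.0) < 0.05 and abs(sigma - 0.5) < 0.05
```

A goodness-of-fit test (e.g., Kolmogorov-Smirnov against the fitted distribution) would complete the check, as the paper does for the real update data.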
