Informatics, Volume 10, Issue 3 (September 2023) – 21 articles

Cover Story: The study evaluates the ability of the GAME2AWE exergame platform to assess seniors' motor and cognitive conditions using in-game performance and behavior data. For example, a senior's reaction time to game stimuli could indicate cognitive speed, while movement precision might reveal information about motor skills. The research focuses on creating machine learning models that use in-game data to efficiently estimate the motor and cognitive states of the elderly. Of the several machine learning techniques employed, the Random Forest model surpassed the others, achieving classification accuracies from 93.6% for cognitive screening to 95.6% for motor assessment. This could be a crucial tool for health professionals, allowing them to remotely identify individuals requiring assistance or intervention.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats, with PDF as the official version of record. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
38 pages, 480 KiB  
Article
Conceptualization and Survey Instrument Development for Website Usability
Informatics 2023, 10(3), 75; https://doi.org/10.3390/informatics10030075 - 20 Sep 2023
Viewed by 1699
Abstract
The aim of this study is to conceptualize website usability and develop a survey instrument to measure related concepts from the perspective of end users. We designed a three-stage methodology. First, concepts related to website usability were derived using a content analysis technique. A total of 16 constructs measuring website usability were defined with their explanations and corresponding open codes. Second, a survey instrument was developed according to the defined open codes and the literature. The instrument was first validated using face validity, pilot testing (n = 30), and content validity (n = 40). Third, the survey instrument was validated using exploratory and confirmatory analyses. In the exploratory analysis, 785 questionnaires were collected from e-commerce website users to validate the factor structure of website usability. For confirmatory factor analysis, a new sample of 1086 users of e-commerce websites was used to confirm the measurement model. In addition, nomological validation was conducted by analyzing the effect of website usability concepts on three key factors: “continued intention to use”, “satisfaction”, and “brand loyalty”. Full article
(This article belongs to the Section Human-Computer Interaction)
22 pages, 1167 KiB  
Article
Reinforcement Learning in Education: A Literature Review
Informatics 2023, 10(3), 74; https://doi.org/10.3390/informatics10030074 - 18 Sep 2023
Viewed by 4283
Abstract
The utilization of reinforcement learning (RL) within the field of education holds the potential to bring about a significant shift in the way students approach and engage with learning and how teachers evaluate student progress. The use of RL in education allows for personalized and adaptive learning, where the difficulty level can be adjusted based on a student’s performance. In turn, this can heighten students’ motivation and engagement. The aim of this article is to investigate the applications and techniques of RL in education and determine its potential impact on enhancing educational outcomes. It compares the various policies induced by RL with baselines and identifies four distinct RL techniques applied in education: the Markov decision process, the partially observable Markov decision process, deep RL networks, and the Markov chain. The main focus of the article is to identify best practices for incorporating RL into educational settings to achieve effective and rewarding outcomes. To accomplish this, the article thoroughly examines the existing literature on using RL in education and its potential to advance educational technology. This work provides a thorough analysis of the various techniques and applications of RL in education to answer questions related to the effectiveness of RL in education and its future prospects. The findings of this study will provide researchers with a benchmark for comparing the usefulness and effectiveness of commonly employed RL algorithms and provide direction for future research in education. Full article
(This article belongs to the Section Machine Learning)
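The adaptive-difficulty loop this review surveys can be framed as a small Markov decision process. The sketch below shows the general idea with tabular Q-learning; the states, actions, and reward function are illustrative stand-ins, not taken from the article:

```python
import random

# Hypothetical sketch: adaptive difficulty selection framed as a tiny MDP.
# States are coarse mastery levels, actions are difficulty settings; the
# reward is a stand-in for an engagement/learning signal.
STATES = ["novice", "intermediate", "advanced"]
ACTIONS = ["easy", "medium", "hard"]

def reward(state, action):
    # Toy reward: matching difficulty to mastery is rewarded most.
    match = {"novice": "easy", "intermediate": "medium", "advanced": "hard"}
    return 1.0 if match[state] == action else -0.5

def train(episodes=2000, alpha=0.1, gamma=0.9, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)
        if rng.random() < epsilon:                    # explore
            a = rng.choice(ACTIONS)
        else:                                         # exploit current best
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        r = reward(s, a)
        s2 = rng.choice(STATES)                       # toy state transition
        best_next = max(q[(s2, x)] for x in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)
```

The induced policy (the mapping from mastery level to difficulty) is what such a system would compare against baselines, as the article describes.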
24 pages, 3708 KiB  
Article
Research Trends in the Use of Machine Learning Applied in Mobile Networks: A Bibliometric Approach and Research Agenda
Informatics 2023, 10(3), 73; https://doi.org/10.3390/informatics10030073 - 09 Sep 2023
Cited by 1 | Viewed by 1633
Abstract
This article examines research trends in the development of mobile networks based on machine learning. The methodological approach starts from an analysis of 260 academic documents selected from the Scopus and Web of Science databases and is based on the parameters of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. Quantity, quality and structure indicators are calculated in order to contextualize the documents’ thematic evolution. The results reveal that, in terms of publications by country, the United States and China stand out; the two countries are competing for fifth-generation (5G) network coverage and are responsible for manufacturing devices for mobile networks. Most of the research on the subject focuses on the optimization of resources and traffic to guarantee the best management and availability of a network, given the high demand for resources and the greater amount of traffic generated by the many Internet of Things (IoT) devices being developed for the market. It is concluded that thematic trends focus on generating algorithms for recognizing and learning the data in the network and on trained models that draw from the available data to improve the experience of connecting to mobile networks. Full article
(This article belongs to the Section Machine Learning)
20 pages, 1739 KiB  
Article
Sp2PS: Pruning Score by Spectral and Spatial Evaluation of CAM Images
Informatics 2023, 10(3), 72; https://doi.org/10.3390/informatics10030072 - 04 Sep 2023
Viewed by 1008
Abstract
CNN models can have millions of parameters, which makes them unattractive for some applications that require fast inference times or small memory footprints. To overcome this problem, one alternative is to identify and remove weights that have a small impact on the loss function of the algorithm, which is known as pruning. Typically, pruning methods are compared in terms of performance (e.g., accuracy), model size and inference speed. However, it is unusual to evaluate whether a pruned model preserves regions of importance in an image when performing inference. Consequently, we propose a metric to assess the impact of a pruning method based on images obtained by model interpretation (specifically, class activation maps). These images are spatially and spectrally compared and integrated by the harmonic mean for all samples in the test dataset. The results show that although the accuracy in a pruned model may remain relatively constant, the areas of attention for decision making are not necessarily preserved. Furthermore, the performance of pruning methods can be easily compared as a function of the proposed metric. Full article
(This article belongs to the Section Machine Learning)
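The scoring idea in the abstract, comparing class activation maps spatially and spectrally and combining the two scores with a harmonic mean, can be sketched in a few lines. This is an illustrative interpretation, not the paper's exact metric: the similarity measure and the naive DFT used as the spectral stand-in are assumptions.

```python
import cmath

def dft_mag(x):
    # Naive O(n^2) DFT magnitudes; a stand-in for the spectral comparison.
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n)
                    for t in range(n)))
            for f in range(n)]

def similarity(a, b):
    # 1 - normalized L1 distance; lies in [0, 1] for non-negative inputs.
    num = sum(abs(x - y) for x, y in zip(a, b))
    den = sum(abs(x) + abs(y) for x, y in zip(a, b)) or 1.0
    return 1.0 - num / den

def sp2ps_score(cam_ref, cam_pruned):
    # Flatten the reference and pruned-model CAMs, compare them spatially
    # and spectrally, then combine the two scores with a harmonic mean.
    a = [v for row in cam_ref for v in row]
    b = [v for row in cam_pruned for v in row]
    spatial = similarity(a, b)
    spectral = similarity(dft_mag(a), dft_mag(b))
    if spatial + spectral == 0:
        return 0.0
    return 2 * spatial * spectral / (spatial + spectral)

identical = [[0.1, 0.9], [0.4, 0.6]]
same = sp2ps_score(identical, identical)
shifted = sp2ps_score(identical, [[0.9, 0.1], [0.6, 0.4]])
print(same, shifted)
```

A pruned model whose attention maps match the reference scores 1; a model whose attention drifted scores lower even if its accuracy is unchanged, which is the abstract's central point.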
24 pages, 20313 KiB  
Review
A Comprehensive Analysis of the Worst Cybersecurity Vulnerabilities in Latin America
Informatics 2023, 10(3), 71; https://doi.org/10.3390/informatics10030071 - 31 Aug 2023
Viewed by 1695
Abstract
Vulnerabilities in cyber defense in the countries of the Latin American region have favored the activities of cybercriminals from different parts of the world who have carried out a growing number of cyberattacks that affect public and private services and compromise the integrity of users and organizations. This article describes the most representative vulnerabilities related to cyberattacks that have affected different sectors of countries in the Latin American region. A systematic review of repositories and the scientific literature was conducted, considering journal articles, conference proceedings, and reports from official bodies and leading brands of cybersecurity systems. The cybersecurity vulnerabilities identified in the countries of the Latin American region are low cybersecurity awareness, lack of standards and regulations, use of outdated software, security gaps in critical infrastructure, and lack of training and professional specialization. Full article
14 pages, 281 KiB  
Essay
What Is It Like to Make a Prototype? Practitioner Reflections on the Intersection of User Experience and Digital Humanities/Social Sciences during the Design and Delivery of the “Getting to Mount Resilience” Prototype
Informatics 2023, 10(3), 70; https://doi.org/10.3390/informatics10030070 - 28 Aug 2023
Viewed by 1107
Abstract
The digital humanities and social sciences are critical for addressing societal challenges such as climate change and disaster risk reduction. One way in which the digital humanities and social sciences add value, particularly in an increasingly digitised society, is by engaging different communities through digital services and products. Alongside this observation, the field of user experience (UX) has also become popular in industrial settings. UX specifically concerns designing and developing digital products and solutions, and, while it is popular in business and other academic domains, there is disquiet in the digital humanities/social sciences towards UX and a general lack of engagement. This paper shares the reflections and insights of a digital humanities/social science practitioner working on a UX project to build a prototype demonstrator for disaster risk reduction. Insights come from formal developmental and participatory evaluation activities, as well as qualitative self-reflection. The paper identifies lessons learnt, noting challenges experienced, including feelings of uncertainty and platform dependency, and reflects on the hesitancy practitioners may have and potential barriers to participation between UX and the digital humanities/social sciences. It concludes that digital humanities/social science practitioners have few skill barriers and offer a valued perspective, but unclear opportunities for critical engagement may present a barrier. Full article
(This article belongs to the Section Social Informatics and Digital Humanities)
26 pages, 691 KiB  
Article
Theoretical Models for Acceptance of Human Implantable Technologies: A Narrative Review
Informatics 2023, 10(3), 69; https://doi.org/10.3390/informatics10030069 - 26 Aug 2023
Viewed by 1694
Abstract
Theoretical models play a vital role in understanding the barriers and facilitators for the acceptance or rejection of emerging technologies. We conducted a narrative review of theoretical models predicting acceptance and adoption of human enhancement embeddable technologies to assess how well those models have studied unique attributes and qualities of embeddables and to identify gaps in the literature. Our broad search across multiple databases and Google Scholar identified 16 relevant articles published since 2016. We discovered that three main theoretical models have been consistently used and refined to explain the acceptance of human enhancement embeddable technology: the technology acceptance model (TAM), the unified theory of acceptance and use of technology (UTAUT), and the cognitive–affective–normative (CAN) model. Psychological constructs such as self-efficacy, motivation, self-determination, and demographic factors were also explored as mediating and moderating variables. Based on our analysis, we collated the verified determinants into a comprehensive model, modifying the CAN model. We also identified gaps in the literature and recommended a further exploration of design elements and psychological constructs. Additionally, we suggest investigating other models such as the matching person and technology model (MPTM), the hedonic-motivation system adoption model (HMSAM), and the value-based adoption model (VAM) to provide a more nuanced understanding of embeddable technologies’ adoption. Our study not only synthesizes the current state of research but also provides a robust framework for future investigations. By offering insights into the complex interplay of factors influencing the adoption of embeddable technologies, we contribute to the development of more effective strategies for design, implementation, and acceptance, thereby paving the way for the successful integration of these technologies into everyday life. Full article
(This article belongs to the Section Human-Computer Interaction)
24 pages, 1901 KiB  
Article
A Machine Learning Python-Based Search Engine Optimization Audit Software
Informatics 2023, 10(3), 68; https://doi.org/10.3390/informatics10030068 - 25 Aug 2023
Viewed by 2226
Abstract
In the present-day digital landscape, websites have increasingly relied on digital marketing practices, notably search engine optimization (SEO), as a vital component in promoting sustainable growth. The traffic a website receives directly determines its development and success. As such, website owners frequently engage the services of SEO experts to enhance their website’s visibility and increase traffic. These specialists employ premium SEO audit tools that crawl the website’s source code to identify structural changes necessary to comply with specific ranking criteria, commonly called SEO factors. Working collaboratively with developers, SEO specialists implement technical changes to the source code and await the results. The cost of purchasing premium SEO audit tools or hiring an SEO specialist typically ranges in the thousands of dollars per year. Against this backdrop, this research endeavors to provide an open-source Python-based Machine Learning SEO software tool to the general public, catering to the needs of both website owners and SEO specialists. The tool analyzes the top-ranking websites for a given search term, assessing their on-page and off-page SEO strategies, and provides recommendations to enhance a website’s performance to surpass its competition. The tool yields remarkable results, boosting average daily organic traffic from 10 to 143 visitors. Full article
(This article belongs to the Topic Software Engineering and Applications)
31 pages, 2980 KiB  
Article
A Proposed Artificial Intelligence Model for Android-Malware Detection
Informatics 2023, 10(3), 67; https://doi.org/10.3390/informatics10030067 - 18 Aug 2023
Viewed by 1580
Abstract
There are a variety of reasons why smartphones have grown so pervasive in our daily lives. While their benefits are undeniable, Android users must be vigilant against malicious apps. The goal of this study was to develop a broad framework for detecting Android malware using multiple deep learning classifiers; this framework was given the name DroidMDetection. To provide precise, dynamic Android malware detection and clustering of different malware families, the framework makes use of unique methodologies built on deep learning and natural language processing (NLP) techniques. When compared to other similar works, DroidMDetection (1) uses API calls and intents in addition to the common permissions to accomplish broad malware analysis, (2) uses digests of features generated by a deep auto-encoder to cluster the detected malware samples into malware family groups, and (3) benefits from both feature extraction and feature selection methods. Numerous reference datasets were used to conduct in-depth analyses of the framework. DroidMDetection’s detection rate was high, and the created clusters were relatively consistent, no matter the evaluation parameters. DroidMDetection surpasses state-of-the-art solutions MaMaDroid, DroidMalwareDetector, MalDozer, and DroidAPIMiner across all metrics we used to measure their effectiveness. Full article
18 pages, 1329 KiB  
Article
Analysis of Factors Associated with Highway Personal Car and Truck Run-Off-Road Crashes: Decision Tree and Mixed Logit Model with Heterogeneity in Means and Variances Approaches
Informatics 2023, 10(3), 66; https://doi.org/10.3390/informatics10030066 - 18 Aug 2023
Viewed by 1123
Abstract
Among the several approaches to crash research, machine learning and econometric analysis have shown potential. This study empirically examines the factors influencing single-vehicle run-off-road crashes for personal cars and trucks using decision trees (DT) and a mixed binary logit model with heterogeneity in means and variances (RPBLHMV), and compares the models' accuracy. The data were obtained from the Department of Highways for 2011–2017, and the results indicated that the RPBLHMV was superior due to its higher overall prediction accuracy, sensitivity, and specificity when compared to the DT model. According to the RPBLHMV results, for the car model, injury severity was associated with driver gender, seat belt use, mounting the island, defective equipment, and safety equipment. For the truck model, crashes located at intersections or medians, mounting the island, and safety equipment had a significant influence on injury severity. The DT results also showed that running off-road and hitting safety equipment can reduce the risk of death for car and truck drivers. These findings illustrate the differences in the factors driving the dependent variable in each model. The RPBLHMV showed the ability to capture random parameters and unobserved heterogeneity, whereas the DT can easily rank variable importance and show which factors matter most. Each model has advantages and disadvantages. The study findings can give the relevant authorities choices for measures and policy improvement based on the two analysis methods, in accordance with their policy design. Therefore, whether advocating road safety or improving policy measures, the use of appropriate methods can increase operational efficiency. Full article
(This article belongs to the Special Issue Feature Papers in Big Data)
15 pages, 2586 KiB  
Article
Exploring How Healthcare Organizations Use Twitter: A Discourse Analysis
Informatics 2023, 10(3), 65; https://doi.org/10.3390/informatics10030065 - 08 Aug 2023
Viewed by 1372
Abstract
The use of Twitter by healthcare organizations is an effective means of disseminating medical information to the public. However, the content of tweets can be influenced by various factors, such as health emergencies and medical breakthroughs. In this study, we conducted a discourse analysis to better understand how public and private healthcare organizations use Twitter and the factors that influence the content of their tweets. Data were collected from the Twitter accounts of five private pharmaceutical companies, two US and two Canadian public health agencies, and the World Health Organization from 1 January 2020 to 31 December 2022. The study applied topic modeling and association rule mining to identify text patterns that influence the content of tweets across different Twitter accounts. The findings revealed that building a reputation on Twitter goes beyond just evaluating the popularity of a tweet in the online sphere. Topic modeling, when applied in conjunction with hashtag and tagging analysis, can help increase tweet popularity. Additionally, the study showed differences in language use and style across the Twitter account categories and discussed how the impact of popular association rules could translate to significantly more user engagement. Overall, the results of this study provide insights into natural language processing for health literacy and present a way for organizations to structure their future content to ensure maximum public engagement. Full article
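The association-rule side of such an analysis can be sketched briefly: treat each tweet as the set of hashtags or topics it contains and mine rules X → Y that meet minimum support and confidence thresholds. The tweets, hashtags, and thresholds below are hypothetical, and this classic support/confidence mining is only an assumed stand-in for the paper's exact procedure.

```python
from itertools import combinations

# Toy "tweets", each reduced to its set of hashtags (illustrative only).
tweets = [
    {"#vaccine", "#health", "#covid"},
    {"#vaccine", "#covid"},
    {"#health", "#wellness"},
    {"#vaccine", "#health", "#covid"},
]

def mine_rules(transactions, min_support=0.5, min_conf=0.8):
    n = len(transactions)
    items = sorted(set().union(*transactions))
    support = {}
    # Count supports for single items and pairs (enough for a sketch).
    for size in (1, 2):
        for itemset in combinations(items, size):
            count = sum(1 for t in transactions if set(itemset) <= t)
            if count / n >= min_support:
                support[itemset] = count / n
    rules = []
    for (a, b), s in [(k, v) for k, v in support.items() if len(k) == 2]:
        for x, y in ((a, b), (b, a)):
            conf = s / support[(x,)]          # confidence of rule x -> y
            if conf >= min_conf:
                rules.append((x, y, round(conf, 2)))
    return rules

rules = mine_rules(tweets)
print(rules)
```

On this toy data, only the #covid/#vaccine pair survives both thresholds, in both directions; rules like these are what would be compared against engagement metrics.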
14 pages, 2155 KiB  
Article
Reinforcement Learning for Reducing the Interruptions and Increasing Fault Tolerance in the Cloud Environment
Informatics 2023, 10(3), 64; https://doi.org/10.3390/informatics10030064 - 02 Aug 2023
Viewed by 1061
Abstract
Cloud computing delivers robust computational services by processing tasks on its virtual machines (VMs) using resource-scheduling algorithms. The cloud’s existing algorithms provide limited results due to inappropriate resource scheduling. Additionally, these algorithms cannot process tasks that generate faults while being computed. The primary reason for this is that the existing algorithms lack an intelligence mechanism to enhance their abilities. To provide an intelligence mechanism that improves the resource-scheduling process and provisions fault tolerance, an algorithm named reinforcement learning-shortest job first (RL-SJF) has been implemented by integrating the RL technique with the existing SJF algorithm. An experiment was conducted on a simulation platform to compare the workings of RL-SJF and SJF, and challenging tasks were computed in multiple scenarios. The experimental results convey that the RL-SJF algorithm enhances the resource-scheduling process by improving the aggregate cost by 14.88% compared to the SJF algorithm. Additionally, the RL-SJF algorithm provided a fault-tolerance mechanism, computing 55.52% of the total tasks compared to the SJF algorithm’s 11.11%. Thus, the RL-SJF algorithm improves overall cloud performance and provides the ideal quality of service (QoS). Full article
(This article belongs to the Topic Theory and Applications of High Performance Computing)
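For readers unfamiliar with the baseline being augmented, a shortest-job-first queue can be sketched as below. The RL layer that the paper layers on top (learning scheduling decisions from rewards) is beyond a short sketch; task lengths and VM speed here are hypothetical units.

```python
import heapq

def sjf_schedule(tasks, vm_speed=1.0):
    """Run tasks shortest-first on a single VM; return (order, total wait).

    tasks: iterable of (length, task_id) pairs; the heap orders by length,
    so the shortest remaining job is always dispatched next.
    """
    heap = list(tasks)
    heapq.heapify(heap)
    clock, total_wait, order = 0.0, 0.0, []
    while heap:
        length, task_id = heapq.heappop(heap)
        total_wait += clock            # this task waited until now
        clock += length / vm_speed     # VM busy for the task's duration
        order.append(task_id)
    return order, total_wait

order, wait = sjf_schedule([(5, "A"), (1, "B"), (3, "C")])
print(order, wait)
```

Shortest-first dispatch minimizes the total waiting time on a single machine; the paper's contribution is letting an RL agent improve on this fixed rule when faults and costs enter the picture.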
30 pages, 6660 KiB  
Article
Finding Good Attribute Subsets for Improved Decision Trees Using a Genetic Algorithm Wrapper; a Supervised Learning Application in the Food Business Sector for Wine Type Classification
Informatics 2023, 10(3), 63; https://doi.org/10.3390/informatics10030063 - 21 Jul 2023
Viewed by 1130
Abstract
This study aims to provide a method that will assist decision makers in managing large datasets, reducing decision risk and highlighting significant subsets of the data with certain weights. To this end, the binary decision tree (BDT) and genetic algorithm (GA) methods are combined using a wrapping technique. The BDT algorithm is used to classify data in a tree structure, while the GA is used to identify the best attribute combinations from a set of possible combinations, referred to as generations. The study seeks to address the overfitting that may occur when classifying large datasets by reducing the number of attributes used in classification. Using the GA, the number of selected attributes is minimized, reducing the risk of overfitting. The algorithm produces many attribute sets, which are classified using the BDT algorithm and assigned a fitness number based on their accuracy. The fittest sets of attributes, or chromosomes, as well as their BDTs, are then selected for further analysis. The training process uses data from a chemical analysis of wines grown in the same region but derived from three different cultivars. The results demonstrate the effectiveness of this innovative approach in identifying the ingredients, and their weights, that characterize a wine's origin. Full article
(This article belongs to the Special Issue Applications of Machine Learning and Deep Learning in Agriculture)
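The wrapper loop the abstract describes, chromosomes as attribute subsets, fitness from classifier accuracy, can be sketched as follows. The fitness function below is a synthetic stand-in for the paper's BDT accuracy, and all sizes and rates are assumed, not taken from the study.

```python
import random

N_ATTRS = 8
USEFUL = {1, 3, 6}   # pretend only these attributes carry signal

def fitness(mask):
    # Stand-in for BDT accuracy: reward useful attributes, penalize
    # chromosome size (mirroring the paper's push to fewer attributes).
    chosen = {i for i in range(N_ATTRS) if mask[i]}
    return len(chosen & USEFUL) - 0.1 * len(chosen)

def evolve(pop_size=20, generations=40, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(N_ATTRS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, N_ATTRS)
            child = a[:cut] + b[cut:]             # one-point crossover
            if rng.random() < 0.1:                # bit-flip mutation
                i = rng.randrange(N_ATTRS)
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
chosen_attrs = [i for i, bit in enumerate(best) if bit]
print(chosen_attrs)
```

In the real wrapper, `fitness` would train a decision tree on the selected attribute columns and return its validation accuracy; everything else stays the same.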
18 pages, 1482 KiB  
Article
Biologically Plausible Boltzmann Machine
Informatics 2023, 10(3), 62; https://doi.org/10.3390/informatics10030062 - 14 Jul 2023
Viewed by 1081
Abstract
The dichotomy in power consumption between digital and biological information processing systems is an intriguing open question, related at its core to the necessity for a more thorough understanding of the thermodynamics of the logic of computing. To contribute in this regard, we put forward a model that implements the Boltzmann machine (BM) approach to computation through an electric substrate under thermal fluctuations and dissipation. The resulting network has precisely defined statistical properties, which are consistent with the data accessible to the BM. It is shown that, with the proposed model, it is possible to design neural-inspired logic gates capable of universal Turing computation under thermal conditions similar to those found in biological neural networks and with information processing and storage electric potentials at comparable scales. Full article
(This article belongs to the Section Machine Learning)
16 pages, 370 KiB  
Article
Poverty Traps in Online Knowledge-Based Peer-Production Communities
Informatics 2023, 10(3), 61; https://doi.org/10.3390/informatics10030061 - 13 Jul 2023
Viewed by 978
Abstract
Online knowledge-based peer-production communities, like question and answer sites (Q&A), often rely on gamification, e.g., through reputation points, to incentivize users to contribute frequently and effectively. These gamification techniques are important for achieving the critical mass that sustains a community and enticing new users to join. However, aging communities tend to build “poverty traps” that act as barriers for new users. In this paper, we present our investigation of 32 domain communities from Stack Exchange and our analysis of how different subjects impact the development of early user advantage. Our results raise important questions about the accessibility of knowledge-based peer-production communities. We consider the analysis results in the context of changing information needs and the relevance of Q&A in the future. Our findings inform policy design for building more equitable knowledge-based peer-production communities and increasing the accessibility to existing ones. Full article
(This article belongs to the Section Human-Computer Interaction)
28 pages, 689 KiB  
Article
Towards a Universal Privacy Model for Electronic Health Record Systems: An Ontology and Machine Learning Approach
Informatics 2023, 10(3), 60; https://doi.org/10.3390/informatics10030060 - 11 Jul 2023
Cited by 2 | Viewed by 1998
Abstract
This paper proposes a novel privacy model for Electronic Health Record (EHR) systems utilizing a conceptual privacy ontology and Machine Learning (ML) methodologies. It underscores the challenges currently faced by EHR systems, such as balancing privacy and accessibility, user-friendliness, and legal compliance. To address these challenges, the study developed a universal privacy model designed to efficiently manage and share patients’ personal and sensitive data across different platforms, such as the MHR and NHS systems. The research employed various BERT techniques to differentiate between legitimate and illegitimate privacy policies. Among them, DistilBERT emerged as the most accurate, demonstrating the potential of the ML-based approach to effectively identify inadequate privacy policies. The paper outlines future research directions, emphasizing the need for comprehensive evaluations, testing in real-world case studies, the investigation of adaptive frameworks, ethical implications, and stakeholder collaboration. This research offers a pioneering approach towards enhancing healthcare information privacy, providing an innovative foundation for future work in this field. Full article

23 pages, 4537 KiB  
Article
A Machine-Learning-Based Motor and Cognitive Assessment Tool Using In-Game Data from the GAME2AWE Platform
Informatics 2023, 10(3), 59; https://doi.org/10.3390/informatics10030059 - 09 Jul 2023
Viewed by 1400
Abstract
With age, a decline in motor and cognitive functionality is inevitable, and it greatly affects the quality of life of the elderly and their ability to live independently. Early detection of these types of decline can enable timely interventions and support for maintaining functional independence and improving overall well-being. This paper explores the potential of the GAME2AWE platform in assessing the motor and cognitive condition of seniors based on their in-game performance data. The proposed methodology involves developing machine learning models to explore the predictive power of features derived from the data collected during gameplay on the GAME2AWE platform. Through a study involving fifteen elderly participants, we demonstrate that utilizing in-game data can achieve high classification performance when predicting motor and cognitive states. Various machine learning techniques were used, but Random Forest outperformed the other models, achieving a classification accuracy ranging from 93.6% for cognitive screening to 95.6% for motor assessment. These results highlight the potential of using exergames within a technology-rich environment as an effective means of capturing the health status of seniors. This approach opens up new possibilities for objective and non-invasive health assessment, facilitating early detection and intervention to improve the well-being of seniors. Full article
(This article belongs to the Special Issue Feature Papers in Medical and Clinical Informatics)
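The abstract does not specify which in-game features feed the models. As an assumed example, screening features such as reaction time and movement precision could be derived from gameplay event logs along these lines (the log schema and feature names are hypothetical, not the platform's actual format):

```python
from statistics import mean, pstdev

def gameplay_features(events):
    """Derive candidate screening features from in-game events.

    `events` is a list of (stimulus_time, response_time, hit_error) tuples;
    this log schema is hypothetical, not GAME2AWE's actual format.
    """
    rts = [resp - stim for stim, resp, _ in events]
    errs = [err for _, _, err in events]
    return {
        "mean_rt": mean(rts),    # slower reactions may signal cognitive decline
        "rt_sd": pstdev(rts),    # response inconsistency as a secondary cue
        "mean_err": mean(errs),  # movement precision proxies motor control
    }
```

Feature vectors of this kind would then be fed to a classifier such as the Random Forest model the paper reports.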

13 pages, 329 KiB  
Article
Digital Citizenship and the Big Five Personality Traits
Informatics 2023, 10(3), 58; https://doi.org/10.3390/informatics10030058 - 07 Jul 2023
Viewed by 1209
Abstract
Over the past two decades, the internet has become an increasingly important venue for political expression, community building, and social activism. Scholars in a wide range of disciplines have endeavored to understand and measure how these transformations have affected individuals’ civic attitudes and behaviors. The Digital Citizenship Scale (original and revised form) has become one of the most widely used instruments for measuring and evaluating these changes, but to date, no study has investigated how digital citizenship behaviors relate to exogenous variables. Using the classic Big Five Factor model of personality (Openness to experience, Conscientiousness, Extroversion, Agreeableness, and Neuroticism), this study investigated how personality traits relate to the key components of digital citizenship. Survey results were gathered across three countries (n = 1820), and analysis revealed that personality traits map uniquely onto digital citizenship in comparison to traditional forms of civic engagement. The implications of these findings are discussed. Full article
(This article belongs to the Section Human-Computer Interaction)

20 pages, 468 KiB  
Article
Information and Communication Technologies in Primary Education: Teachers’ Perceptions in Greece
Informatics 2023, 10(3), 57; https://doi.org/10.3390/informatics10030057 - 07 Jul 2023
Cited by 1 | Viewed by 1152
Abstract
Innovative learning methods, including the increasing use of Information and Communication Technologies (ICT) applications, are transforming the contemporary educational process. Teachers’ perceptions of ICT, computer self-efficacy, and demographics are some of the factors that have been found to impact the use of ICT in the educational process. The aim of the present research is to analyze the perceptions of primary school teachers about ICT and how these perceptions affect ICT use in the educational process, through the case of Greece. To do so, primary research was carried out. Data from 285 valid questionnaires were statistically analyzed using descriptive statistics, principal components analysis, correlation, and regression analysis. The main results were in accordance with the relevant literature, indicating the impact of teachers’ self-efficacy, perceptions, and demographics on ICT use in the educational process. These results provide useful insights for the successful implementation of ICT in education. Full article
(This article belongs to the Section Human-Computer Interaction)
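The regression step in a questionnaire analysis like this reduces, in the single-predictor case, to ordinary least squares. The sketch below (with invented Likert-style scores) shows how, for example, reported ICT use might be regressed on a computer self-efficacy score; the variables and values are illustrative assumptions, not the study's data:

```python
from statistics import mean

def ols(xs, ys):
    """Least-squares slope and intercept for a single predictor."""
    mx, my = mean(xs), mean(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Invented scores: computer self-efficacy (x) vs. reported ICT use (y).
slope, intercept = ols([1, 2, 3, 4, 5], [2, 3, 3, 5, 5])
```

A positive slope here would mirror the study's finding that self-efficacy is associated with greater ICT use; the actual analysis additionally uses descriptive statistics, principal components analysis, and correlation.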

23 pages, 4485 KiB  
Article
FOXS-GSC—Fast Offset Xpath Service with HexagonS Communication
Informatics 2023, 10(3), 56; https://doi.org/10.3390/informatics10030056 - 04 Jul 2023
Viewed by 782
Abstract
Congestion in large cities is widely recognized as a problem that impacts various aspects of society, including the economy and public health. To support the urban traffic system and to mitigate traffic congestion and the damage it causes, in this article we propose an assistant Intelligent Transport Systems (ITS) service for traffic management in Vehicular Networks (VANET), which we name FOXS-GSC, for Fast Offset Xpath Service with hexaGonS Communication. FOXS-GSC uses VANET communication and the fog computing paradigm to detect and recommend an alternative vehicle route to avoid traffic jams. Unlike previous solutions in the literature, the proposed service offers a versatile approach in which traffic road classification and route suggestions can be made by infrastructure or by the vehicle itself without compromising the quality of the route service. To achieve this, the service operates in a decentralized way, and the components of the service (vehicles/infrastructure) exchange messages containing vehicle information and regional traffic information. For communication, the proposed approach uses a new dedicated multi-hop protocol that has been specifically designed based on the characteristics and requirements of a vehicle routing service. Therefore, by adapting to the inherent characteristics of a vehicle routing service, such as the density of regions, the proposed communication protocol both enhances reliability and improves the overall efficiency of the vehicle routing service. Simulation results comparing FOXS-GSC with baseline solutions and other proposals from the literature demonstrate its significant impact, reducing network congestion by up to 95% while maintaining a coverage of 97% across various scenario characteristics. Concerning road traffic efficiency, traffic quality improves by 29%, with a 10% reduction in carbon emissions. Full article
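The abstract does not detail the hexagon-based region scheme. One common way to bucket vehicle positions into hexagonal cells, a plausible building block for region-level traffic aggregation but not necessarily the paper's actual method, is axial-coordinate rounding on a flat-top hex grid:

```python
import math

def point_to_hex(x, y, size):
    """Map a planar position (e.g., a vehicle's location) to the axial
    coordinates of its hexagonal cell on a flat-top grid of the given size."""
    q = (2.0 / 3.0 * x) / size
    r = (-1.0 / 3.0 * x + math.sqrt(3) / 3.0 * y) / size
    return cube_round(q, r)

def cube_round(q, r):
    """Round fractional axial coordinates to the nearest hex, keeping q+r+s=0."""
    s = -q - r
    rq, rr, rs = round(q), round(r), round(s)
    dq, dr, ds = abs(rq - q), abs(rr - r), abs(rs - s)
    # Recompute the coordinate with the largest rounding error from the others.
    if dq > dr and dq > ds:
        rq = -rr - rs
    elif dr > ds:
        rr = -rq - rs
    return rq, rr
```

Vehicles that map to the same cell could then pool their observations into the regional traffic information the service's components exchange.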

10 pages, 2139 KiB  
Article
Classification of Benign and Malignant Renal Tumors Based on CT Scans and Clinical Data Using Machine Learning Methods
Informatics 2023, 10(3), 55; https://doi.org/10.3390/informatics10030055 - 03 Jul 2023
Cited by 1 | Viewed by 1872
Abstract
Up to 20% of renal masses ≤4 cm are found to be benign at the time of surgical excision, raising concern for overtreatment. However, malignancy risk currently cannot be predicted accurately before surgery using imaging alone. The objective of this study is to propose a machine learning (ML) framework for pre-operative renal tumor classification using readily available clinical and CT imaging data. We tested both traditional ML methods (i.e., XGBoost, random forest (RF)) and deep learning (DL) methods (i.e., multilayer perceptron (MLP), 3D convolutional neural network (3DCNN)) to build the classification model. We discovered that the combination of clinical and radiomics features produced the best results (i.e., AUC [95% CI] of 0.719 [0.712–0.726], a precision [95% CI] of 0.976 [0.975–0.978], a recall [95% CI] of 0.683 [0.675–0.691], and a specificity [95% CI] of 0.827 [0.817–0.837]). Our analysis revealed that employing ML models with CT scans and clinical data holds promise for classifying the risk of renal malignancy. Future work should focus on externally validating the proposed model and features to better support clinical decision-making in renal cancer diagnosis. Full article
(This article belongs to the Special Issue Feature Papers in Medical and Clinical Informatics)
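The precision, recall, and specificity figures reported above follow directly from a binary confusion matrix. A minimal sketch of those metrics (labels invented, 1 = malignant):

```python
def binary_metrics(y_true, y_pred):
    """Precision, recall (sensitivity), and specificity from binary labels."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == 1 and p == 1 for t, p in pairs)  # malignant, flagged
    fp = sum(t == 0 and p == 1 for t, p in pairs)  # benign, flagged
    fn = sum(t == 1 and p == 0 for t, p in pairs)  # malignant, missed
    tn = sum(t == 0 and p == 0 for t, p in pairs)  # benign, cleared
    return {
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```

In this setting, high precision with moderate recall, as the study reports, means few benign masses are flagged as malignant, while some malignancies are still missed.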
