Computers doi: 10.3390/computers13030080
Authors: Qiang Chu Chao Ping Chen Haiyang Hu Xiaojun Wu Baoen Han
Text input using hand gestures is an essential component of human–computer interaction technology, providing users with a more natural and enriching interaction experience. Nevertheless, current gesture input methods suffer from a variety of issues, including a high learning cost for users, poor input performance, and reliance on dedicated hardware. To solve these problems and better meet interaction requirements, this paper proposes a hand recognition-based text input method called iHand. iHand uses a two-branch hand recognition algorithm combining a landmark model and a lightweight convolutional neural network. The landmark model serves as the backbone network to extract hand landmarks, and an optimized classification head, which preserves the spatial structure of the landmarks, classifies the gestures. When the landmark model fails to extract hand landmarks, the lightweight convolutional neural network is employed for classification instead. For letter entry, to reduce the learning cost, the sequence of letters is mapped onto a two-dimensional layout, and users can type with seven simple hand gestures. Experimental results on public datasets show that the proposed hand recognition algorithm is more robust than state-of-the-art approaches. Furthermore, we tested users’ first-use performance of iHand for text input: the average input speed was 5.6 words per minute, and the average input error rate was only 1.79%.
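The fallback logic described above (landmark-based classification when landmark extraction succeeds, a lightweight CNN otherwise) can be sketched as follows; this is an illustrative sketch, not the authors' code, and the stub classifiers and function names are placeholders for the real models.

```python
# Illustrative sketch of iHand's two-branch dispatch: classify from hand
# landmarks when extraction succeeds, otherwise fall back to an
# image-level classifier. The stubs below stand in for the real models.

def classify_landmarks(landmarks):
    # Placeholder for the landmark-based classification head.
    return "gesture_from_landmarks"

def classify_image(frame):
    # Placeholder for the lightweight CNN fallback.
    return "gesture_from_cnn"

def recognize(frame, extract_landmarks):
    """Two-branch recognition: prefer landmarks, fall back to the CNN."""
    landmarks = extract_landmarks(frame)
    if landmarks is not None:        # branch 1: landmark model succeeded
        return classify_landmarks(landmarks)
    return classify_image(frame)     # branch 2: CNN fallback

# A frame where landmark extraction fails triggers the fallback branch.
print(recognize("frame", lambda f: None))          # CNN branch
print(recognize("frame", lambda f: [(0.1, 0.2)]))  # landmark branch
```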
Computers doi: 10.3390/computers13030079
Authors: Matheus Marabesi Alicia García-Holgado Francisco José García-Peñalvo
Test-driven development (TDD) is an agile practice of writing test code before production code, following three stages: red, green, and refactor. In the red stage, the test code is written; in the green stage, the minimum code necessary to make the test pass is implemented; and in the refactor stage, improvements are made to the code. This practice is widespread across the industry, and various studies have been conducted to understand its benefits and impacts on the software development process. Despite its popularity, TDD studies often focus on the technical aspects of the practice, such as the external/internal quality of the code, productivity, test smells, and code comprehension, rather than the context in which it is practiced. In this paper, we present a systematic literature review using Scopus, Web of Science, and Google Scholar that focuses on the TDD practice and the influences that lead to the introduction of test smells/anti-patterns in the test code. The findings suggest that organizational structure influences the testing strategy. Additionally, there is a tendency to use the terms test smells and TDD anti-patterns interchangeably, and test smells negatively impact code comprehension. Furthermore, TDD styles and the relationship between TDD practice and the generation of test smells are frequently overlooked in the literature.
Computers doi: 10.3390/computers13030078
Authors: Timothé Verstraete Naveed Muhammad
Pedestrian collision avoidance is a crucial task in the development and democratization of autonomous vehicles. The aim of this review is to provide an accessible overview of the pedestrian collision avoidance systems in autonomous vehicles that have been proposed by the scientific community over the last ten years. For this purpose, we propose a classification of studies in the literature in terms of the following: (i) pedestrian detection methods, (ii) collision avoidance approaches, (iii) actions, (iv) computing methods, and (v) test methods.
Computers doi: 10.3390/computers13030077
Authors: Richard Chbeir Mirjana Ivanovic Yannis Manolopoulos Claudio Silvestri
The 27th International Database Engineering and Applications Symposium (IDEAS-2023) was held in Heraklion, Crete, Greece, on 5–7 May 2023 [...]
Computers doi: 10.3390/computers13030076
Authors: Eduard Puerto Jose Aguilar Angel Pinto
Currently, approaches to correcting misspelled words struggle when the words are complex or the volume of text is massive. This is even more serious for Spanish, for which very few studies exist. Thus, proposing new approaches to word recognition and correction remains a research topic of interest. In particular, one interesting approach is to computationally simulate the brain’s process for recognizing misspelled words and correcting them automatically. This article presents an automatic system for recognizing and correcting misspelled words in Spanish texts, based on the systematic theory of pattern recognition of the mind (PRTM). The main innovation of the research is the use of the PRTM theory in this context. In particular, a corrector of misspelled words in Spanish based on this theory, called Ar2p-Text, was designed and built. Ar2p-Text carries out a recursive analysis of words through a disaggregation/integration mechanism, using specialized hierarchical recognition modules that define formal strategies to determine whether a word is written correctly. A comparative evaluation shows that the precision and coverage of Ar2p-Text are competitive with other spell-checkers; in the experiments, the system achieves better performance than the three other systems. Overall, Ar2p-Text obtains an F-measure of 83%, above the 73% achieved by the other spell-checkers. Our hierarchical approach reuses a large amount of information, improving text analysis in both quality and efficiency. These preliminary results suggest that the approach can support future word-correction technologies inspired by this hierarchical design.
Computers doi: 10.3390/computers13030075
Authors: Artyom V. Gorchakov Liliya A. Demidova Peter N. Sovietov
Modern software systems consist of many components, and their source code is hard for new developers to understand and maintain. Aiming to improve the readability and understandability of source code, companies that specialize in software development adopt programming standards, software design patterns, and static analyzers to decrease the complexity of software. Recent research has introduced a number of code metrics that numerically characterize the maintainability of code snippets. Cyclomatic Complexity (CycC) is one widely used metric for measuring the complexity of software; its value is equal to the number of decision points in a program plus one. However, CycC does not take into account the nesting levels of the syntactic structures that break the linear control flow of a program. To resolve this, the Cognitive Complexity (CogC) metric was proposed as a successor to CycC. In this paper, we describe a rule-based algorithm and its specializations for measuring the complexity of programs. We express the CycC and CogC metrics by means of the described algorithm and propose a new complexity metric named Educational Complexity (EduC) for use in educational digital environments. EduC is at least as strict as CycC and CogC and includes additional checks based on definition-use graph analysis of a program. We evaluate the CycC, CogC, and EduC metrics using the source code of programs submitted to a Digital Teaching Assistant (DTA) system that automates a university programming course. The results confirm that, compared to CycC and CogC, EduC rejects more overcomplicated and difficult-to-understand solutions to the unique programming exercises generated by the DTA system.
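The "decision points plus one" definition of CycC quoted above can be illustrated with a short, stdlib-only sketch for Python functions. This is not the paper's rule-based algorithm, and the counting is simplified (for instance, each boolean-operator chain counts once):

```python
# Minimal sketch of Cyclomatic Complexity (CycC) for Python code:
# CycC = number of decision points in the program plus one.
import ast

# Node types treated as decision points in this simplified sketch.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Count decision points in the parsed source and add one."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES)
                    for node in ast.walk(tree))
    return decisions + 1

snippet = """
def clamp(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x
"""
print(cyclomatic_complexity(snippet))  # two if-statements -> 3
```

CogC would additionally weight each structure by its nesting level, which is exactly the shortcoming of CycC that the abstract describes.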
Computers doi: 10.3390/computers13030074
Authors: Hibatul Azizi Hisyam Ng Toktam Mahmoodi
Novel architectures incorporating transport networks and artificial intelligence (AI) are currently being developed for beyond-5G and 6G technologies. Given that the interfaces between mobile and transport network nodes carry high packet volumes in both downlink and uplink streams, 6G networks envision adopting diverse transport networks, including non-terrestrial ones such as satellite networks, High-Altitude Platform Systems (HAPS), and DOCSIS cable. Hence, there is a need to match traffic to the appropriate transport network. This paper focuses on this matching problem and defines a method that leverages machine learning and mixed-integer linear programming. The proposed scheme develops a traffic steering capability across three types of transport networks: optical, satellite, and DOCSIS cable. Our findings demonstrate more than 90% accuracy in steering traffic to the respective type of transport network for dedicated transport network resources.
Computers doi: 10.3390/computers13030073
Authors: Mingyuan Hu Hyeongki Ahn Hyein Kang Yoonuh Chung Kwanho You
As control algorithms evolve, their enhanced performance is often accompanied by increased complexity, reaching a point where practical experimentation becomes infeasible. This situation has led many theoretical studies to rely solely on simulations without experimental verification. To address this gap, this study introduces a rapid experimentation protocol (REP) for applying field-oriented control (FOC) strategies to permanent magnet synchronous motors (PMSMs) based on model-based design (MBD) and automated code generation. REP is designed to be user-friendly and straightforward, offering a less complex and more accessible alternative to DSP toolboxes. Its excellent hardware compatibility is conducive to code porting and development. With this protocol, users can quickly conduct FOC strategy experiments with reduced dependency on the complex automated code generation tools often associated with toolboxes. Centered around the PMSM model, the method uses only the fundamental modules of MATLAB R2023b/Simulink, greatly simplifying the user experience. To demonstrate the feasibility and efficiency of the protocol, models for both sensor-based and sensorless control are developed. The practicality of REP, including sensor-based and sensorless experiments, is successfully validated on an Arm Cortex-M4-based GD32 microcontroller.
Computers doi: 10.3390/computers13030071
Authors: Van-Nam Pham Quang-Huy Do Ba Duc-Anh Tran Le Quang-Minh Nguyen Dinh Do Van Linh Nguyen
Most of the world’s cashew nuts are produced in developing countries. Hence, there is a need for a low-cost system to automatically grade cashew nuts, especially on small-scale farms, to improve mechanization and automation in agriculture and help reduce the price of the products. To address this issue, in this work we first propose a low-cost grading system for cashew nuts using off-the-shelf equipment. The most important but most complicated part of the system is its “eye”, which is required to detect and classify the nuts into different grades. To this end, we exploit the advantages of both the YOLOv8 and Transformer models by combining them into a single model. More specifically, we develop a module called SC3T that can be integrated into the backbone of the YOLOv8 architecture. In the SC3T module, a Transformer block is integrated alongside the C3TR module. More importantly, the classifier is not only efficient but also compact, so it can be implemented on an embedded device of our cashew nut grading system. The proposed classifier, called the YOLOv8–Transformer model, enables our grading system, through a low-cost camera, to correctly detect and accurately classify cashew nuts into four quality grades. In our grading system, we also developed an actuation mechanism to efficiently sort the nuts according to the classification results, getting the products ready for packaging. To verify the effectiveness of the proposed classifier, we collected a dataset from our sorting system and trained and tested the model. The obtained results demonstrate that our approach outperforms all the baseline methods on the collected image data.
Computers doi: 10.3390/computers13030072
Authors: Nino Adamashvili Nino Zhizhilashvili Caterina Tricase
The study presents a comprehensive examination of the recent advancements in wine production using the Internet of Things (IoT), Artificial Intelligence (AI), and Blockchain Technology (BCT). The paper aims to provide insights into the implementation of these technologies in the wine supply chain and to identify the potential benefits associated with their use. The study highlights the various applications of IoT, AI, and BCT in wine production, including vineyard management, wine quality control, and supply chain management. It also discusses the potential benefits of these technologies, such as improved efficiency, increased transparency, and reduced costs. The study concludes by presenting a framework proposed by the authors to overcome the challenges associated with implementing these technologies in the wine supply chain, and suggests areas for future research. The proposed framework addresses the challenges of lack of transparency, lack of ecosystem management in the wine industry, and irresponsible spending associated with the lack of monitoring and prediction tools. Overall, the study provides valuable insights into the potential of IoT, AI, and BCT in optimizing the wine supply chain and offers a comprehensive review of the existing literature on the subject.
Computers doi: 10.3390/computers13030070
Authors: Alexander Rosbak-Mortensen Marco Jansen Morten Muhlig Mikkel Bjørndahl Kristensen Tøt Ivan Nikolov
Automatic anomaly detection plays a critical role in surveillance systems but requires datasets comprising large amounts of annotated data to train and evaluate models. Gathering and annotating these data is a labor-intensive task that can become costly. One way to circumvent this is to use synthetic data to augment anomalies directly into existing datasets. In this way, far more diverse scenarios can be created, and they come directly with annotations. However, this also poses new issues for the end users, computer-vision engineers and researchers, who are not readily familiar with 3D modeling, game development, or computer graphics methodologies and must rely on external specialists to use or tweak such pipelines. In this paper, we extend our previous work on an application that synthesizes dataset variations using 3D models and augments anomalies on real backgrounds using the Unity Engine. We developed a high-usability user interface for our application through a series of RITE experiments and evaluated the final product with the help of deep-learning specialists, who provided positive feedback regarding its usability, accessibility, and user experience. Finally, we tested whether the proposed solution can be used in the context of traffic surveillance by augmenting the training data from the challenging Street Scene dataset. We found that by using our synthetic data, we could achieve higher detection accuracy. We also propose next steps to expand the proposed solution for better usability and render accuracy through the use of segmentation pre-processing.
Computers doi: 10.3390/computers13030069
Authors: Pedro Teixeira Celeste Eusébio Leonor Teixeira
The right to tourism has become a crucial aspect of society. Through more accessible tourism, it is possible to improve travel conditions for people with disabilities. Nonetheless, barriers still exist, with the lack of information about accessibility conditions representing a main obstacle. Information systems (IS) can help overcome these hurdles. However, methodologies to support the development of accessible IS are currently scarce. Thus, this study develops an accessible IS for accessible tourism and proposes a roadmap to support the creation of accessible IS solutions. To obtain the intended accessible tourism solution, an action research methodology was followed, which involved adapting established frameworks that combine Agile development and user-centered design techniques. Following this methodology, a web application named access@tour by action was created. This solution is capable of improving information management within the accessible tourism market. From this experimental study, a proposal for a methodological roadmap emerged. The roadmap helps to clarify how to develop accessible IS by demonstrating techniques for gathering accessibility requirements and validating them. It is adaptable and suitable for IS projects involving accessibility. Both results provide a better perspective on how to integrate accessibility during IS development, potentially supporting future researchers in creating accessible solutions.
Computers doi: 10.3390/computers13030068
Authors: Emi Iryanti Paulus Insap Santosa Sri Suning Kusumawardani Indriana Hidayah
Nielsen’s heuristics are widely recognized for usability evaluation, but they are often considered insufficiently specific for assessing particular domains, such as e-learning. Currently, e-learning plays a pivotal role in higher education because of the shift in the educational paradigm from a teacher-centered to a student-centered approach. The criteria used in multiple sets of heuristics for evaluating e-learning are carefully examined based on the definition of each criterion. Criteria with similar meanings are consolidated into a single criterion, resulting in 20 new criteria (spanning three primary aspects) for the evaluation of e-learning. These 20 new criteria cover key aspects related to the user interface, learning development, and motivation. Each aspect is assigned a weight to facilitate prioritization when implementing improvements, which is especially beneficial for institutions where the units responsible for e-learning have limited resources. The weighting can be further refined toward more optimal outcomes by employing a Fuzzy Preference Programming method known as Inverse Trigonometric Fuzzy Preference Programming (ITFPP). The higher the assigned weight, the greater the priority for implementing improvements.
Computers doi: 10.3390/computers13030067
Authors: Mario Casillo Liliana Cecere Francesco Colace Angelo Lorusso Domenico Santaniello
Integrating modern and innovative technologies such as the Internet of Things (IoT) and Machine Learning (ML) presents new opportunities in healthcare, especially in medical spa therapies. Once considered palliative, these therapies conducted using mineral/thermal water are now recognized as a targeted and specific therapeutic modality. The peculiarity of these treatments lies in their simplicity of administration, which allows for prolonged treatments, often lasting weeks, with progressive and controlled therapeutic effects. Thanks to new technologies, it will be possible to continuously monitor the patient, both on-site and remotely, increasing the effectiveness of the treatment. In this context, wearable devices, such as smartwatches, facilitate non-invasive monitoring of vital signs by collecting precise data on several key parameters, such as heart rate and blood oxygenation level, providing a detailed view of treatment progress. The constant acquisition of data through the IoT, combined with the advanced analytics of ML, enables precise analysis, real-time monitoring, and personalized treatment adaptation. This article introduces an IoT-based framework integrated with ML techniques to monitor spa treatments, providing tailored customer management and more effective results. A preliminary experimentation phase was designed and implemented to evaluate the system’s performance through evaluation questionnaires. Encouraging preliminary results show that this innovative approach can enhance and highlight the therapeutic value of spa therapies and their significant contribution to personalized healthcare.
Computers doi: 10.3390/computers13030066
Authors: János Hollósi Áron Ballagi Gábor Kovács Szabolcs Fischer Viktor Nagy
Monitoring bus driver behavior and posture in urban public transport’s dynamic and unpredictable environment requires robust real-time analytics systems. Traditional camera-based systems that use computer vision techniques for facial recognition are foundational. However, they often struggle with real-world challenges such as sudden driver movements, active driver–passenger interactions, variations in lighting, and physical obstructions. Our investigation covers four different neural network architectures, including two variations of convolutional neural networks (CNNs) that form the comparative baseline. The capsule network (CapsNet) developed by our team has been shown to be superior in terms of efficiency and speed in facial recognition tasks compared to traditional models. It offers a new approach for rapidly and accurately detecting a driver’s head position within the wide-angled view of the bus driver’s cabin. This research demonstrates the potential of CapsNets in driver head and face detection and lays the foundation for integrating CapsNet-based solutions into real-time monitoring systems to enhance public transportation safety protocols.
Computers doi: 10.3390/computers13030065
Authors: Diana Pérez-Marín Raquel Hijón-Neira Celeste Pizarro
Pedagogic Conversational Agents (PCAs) are interactive systems that engage the student in a dialogue to teach some domain. They can take the role of a teacher, student, or companion, and adopt several forms. Our previous work found a significant increase in students’ performance when learning programming with PCAs in the teacher role. However, it is not common to find PCAs used in classrooms. In this paper, we explore whether pre-service teachers would better accept PCAs for teaching programming if the PCAs were co-designed with them. Pre-service teachers were chosen because they are still in training, so they can be taught what PCAs are and how this technology could be helpful. Moreover, pre-service teachers can choose whether to integrate PCAs into the teaching activities that they carry out as part of their degree course. An experiment with 35 pre-service primary education teachers was carried out during the 2021/2022 academic year to co-design a robotic PCA to teach programming. The experience validates the idea that involving pre-service teachers in the design of a PCA facilitates their willingness to integrate this technology in their classrooms. In total, 97% of the pre-service teachers stated in a survey that they believed the robotic PCA could help children learn programming, and 80% answered that they would like to use it in their classrooms.
Computers doi: 10.3390/computers13030064
Authors: Dragos Alexandru Andrioaia Vasile Gheorghita Gaitan George Culea Ioan Viorel Banu
Over the past decade, Unmanned Aerial Vehicles (UAVs) have been increasingly used due to their untapped potential. Li-ion batteries are the most widely used power source for electrically operated UAVs thanks to their advantages, such as high energy density and a high number of operating cycles. It is therefore necessary to estimate the Remaining Useful Life (RUL) and predict the capacity of Li-ion batteries to prevent a UAV’s loss of autonomy, which can cause accidents or material losses. In this paper, we propose a method for predicting the RUL of Li-ion batteries using a data-driven approach. To maximize performance, three machine learning models, Support Vector Machine for Regression (SVMR), Multiple Linear Regression (MLR), and Random Forest (RF), were compared for estimating the RUL of Li-ion batteries. The method can be implemented within UAVs’ Predictive Maintenance (PdM) systems.
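As a rough illustration of the data-driven RUL idea described above (not the authors' pipeline), one can fit a capacity-fade model to observed cycle/capacity pairs and extrapolate to an end-of-life threshold. Plain least squares stands in for MLR here, and the capacity values are synthetic:

```python
# Hedged sketch of a data-driven RUL estimate: fit a linear
# capacity-fade model and extrapolate to an end-of-life threshold.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def remaining_useful_life(cycles, capacities, eol_capacity):
    """Cycles left until the fitted fade line crosses end-of-life."""
    a, b = fit_line(cycles, capacities)
    eol_cycle = (eol_capacity - b) / a   # a < 0 for a fading battery
    return max(0.0, eol_cycle - cycles[-1])

cycles = [0, 50, 100, 150, 200]
caps = [2.00, 1.95, 1.90, 1.85, 1.80]    # Ah, synthetic linear fade
print(remaining_useful_life(cycles, caps, eol_capacity=1.60))  # 200.0
```

SVMR and RF, which the paper compares against MLR, would replace the linear fit with more flexible regressors while keeping the same extrapolation idea.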
Computers doi: 10.3390/computers13030063
Authors: Sotirios Kontogiannis Stefanos Koundouras Christos Pikridas
Novel monitoring architectures are required to detect viticulture diseases early. Existing micro-climate decision support systems can only cope with late detection from empirical and semi-empirical models that provide less accurate results. Such models cannot support precision viticulture planning and pesticide control actions with the early warnings that should trigger interventions. This paper presents a new plant-level monitoring architecture called thingsAI. The proposed system uses low-cost, autonomous, easy-to-install IoT sensors for vine-level monitoring, employing the low-power LoRaWAN protocol for sensory measurement acquisition. Facilitated by a distributed cloud architecture and open-source user interfaces, it provides state-of-the-art deep learning inference services and decision support interfaces. This paper also presents a new deep learning detection algorithm based on supervised fuzzy annotation processes, targeting downy mildew detection and, therefore, the planning of early interventions. The authors tested their proposed system and deep learning model on the protected-designation-of-origin grape variety Debina, cultivated in Zitsa, Greece. The experimental results show that the proposed model can detect vine locations and timely breakpoints of mildew occurrences, which farmers can use as input for targeted intervention efforts.
Computers doi: 10.3390/computers13030062
Authors: Arthur Yosef Idan Roth Eli Shnaider Amos Baranes Moti Schneider
Association rule learning is a machine learning approach that aims to find substantial relations among attributes within one or more datasets. We address the main problem of this technology: the excessive computation time and memory required to discover the association rules. Most of the literature on association rules treats these issues as major obstacles, especially for very large databases. In this paper, we introduce a method that substantially lowers the run time and memory requirements in comparison to the methods presently in use (a reduction from O(2^m) to O(2m^2) in the worst case).
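For context, the exponential cost the abstract refers to comes from the power-set candidate space of m attributes, as in the brute-force frequent-itemset search below. This is a didactic, stdlib-only sketch, not the paper's method, and the basket data are made up:

```python
# Brute-force frequent-itemset mining: with m distinct items, the
# candidate space enumerated here is the power set, i.e. O(2^m) work,
# which is the cost that smarter association-rule methods attack.
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Return {itemset: support} for all itemsets above min_support."""
    items = sorted({i for t in transactions for i in t})
    n = len(transactions)
    frequent = {}
    for k in range(1, len(items) + 1):          # itemset size
        for cand in combinations(items, k):     # all C(m, k) candidates
            support = sum(set(cand) <= t for t in transactions) / n
            if support >= min_support:
                frequent[cand] = support
    return frequent

baskets = [{"bread", "milk"}, {"bread", "butter"},
           {"bread", "milk", "butter"}, {"milk"}]
result = frequent_itemsets(baskets, min_support=0.5)
print(result)
```

Association rules are then read off the frequent itemsets, e.g. bread appears in 75% of the baskets above, and bread with milk in 50%.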
Computers doi: 10.3390/computers13030061
Authors: Anastasia Peshkovskaya Sergey Chudinov Galina Serbina Alexander Gubanov
As the network structure of virtual communities related to suicide and school shootings remains unaddressed in the scientific literature, we employed basic demographic analysis and social network analysis (SNA) to show common features, as well as distinct facets, of the communities’ structure and their followers’ networks. Open and publicly accessible data on over 16,000 user accounts were collected with a social media monitoring system. The results showed that adolescents and young adults were the main audience of suicide-related and school shooting fan communities. The list of blocked virtual groups related to school shootings was more extensive than that for suicide, which indicates a high degree of radicalization of school shooting virtual groups. Homogeneity of followers’ interests was more typical for subscribers of suicide-related communities. The social network analysis showed that followers of school shooting virtual groups were closely interconnected with their peers and their network was monolithic, while followers of suicide-related virtual groups were fragmented into numerous communities, so the presence of a giant connected component in their network can be questioned. We consider these results highly relevant for better understanding the network aspects of virtual information existence, the spreading of harmful information, and its potential impact on society.
Computers doi: 10.3390/computers13030060
Authors: Dimitrios Chatziamanetoglou Konstantinos Rantos
Cyber Threat Intelligence (CTI) has become increasingly important in safeguarding organizations against cyber threats. However, managing, storing, analyzing, and sharing vast and sensitive threat intelligence data is a challenge. Blockchain technology, with its robust and tamper-resistant properties, offers a promising solution to address these challenges. This systematic literature review explores the recent advancements and emerging trends at the intersection of CTI and blockchain technology. We reviewed research papers published during the last 5 years to investigate the various proposals, methodologies, models, and implementations related to distributed ledger technology: how it can be used to collect, store, analyze, and share CTI in a secure and controlled manner, and how this combination can further support additional dimensions such as quality assurance, reputation, and trust. Our findings highlight that the convergence of CTI and blockchain concentrates on the dissemination phase of the CTI lifecycle, reflecting a substantial emphasis on optimizing communication and sharing mechanisms, with an equitable emphasis on both permissioned, private blockchains and permissionless, public blockchains that addresses the diverse requirements and preferences within the CTI community. The analysis reveals a focus on the tactical and technical dimensions of CTI over the operational and strategic levels, indicating a more technically oriented utilization within the domain of blockchain technology. The technological landscape supporting CTI and blockchain integration emerges as multifaceted, featuring pivotal roles played by smart contracts, machine learning, federated learning, consensus algorithms, IPFS, deep learning, and encryption. This integration of diverse technologies contributes to the robustness and adaptability of the proposed frameworks. Moreover, our exploration unveils trust and privacy as predominant themes, underscoring their pivotal roles in shaping the landscape of this research area. Additionally, our study addresses the maturity of these integrated systems: assessed against the Technology Readiness Level (TRL) scale, research efforts span the early to middle stages of implementation maturity. This study signifies the ongoing evolution and maturation of research endeavors at the dynamic intersection of CTI and blockchain technology, identifies trends, and highlights research gaps that future research in the field can address.
Computers doi: 10.3390/computers13030059
Authors: Ryan Baker del Aguila Carlos Daniel Contreras Pérez Alejandra Guadalupe Silva-Trujillo Juan C. Cuevas-Tello Jose Nunez-Varela
Recent advancements in cybersecurity threats and malware have brought into question the safety of modern software and computer systems. As a direct result, artificial intelligence-based solutions have been on the rise. The goal of this paper is to demonstrate the efficacy of memory-optimized machine learning solutions for the task of static analysis of software metadata. The study comprises an evaluation and comparison of the performance metrics of three popular machine learning solutions: artificial neural networks (ANNs), support vector machines (SVMs), and gradient boosting machines (GBMs). The study provides insights into the effectiveness of memory-optimized machine learning solutions when detecting previously unseen malware. We found that the ANN shows the best performance, achieving 93.44% accuracy in classifying programs as either malware or legitimate even under extreme memory constraints.
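As a toy illustration of classifying software from static metadata under tight memory (not the paper's ANN, and with synthetic features), a single logistic unit whose entire state is one weight per feature can already separate simple feature patterns:

```python
# Toy, stdlib-only stand-in for memory-light static-analysis
# classification: one logistic unit trained by stochastic gradient
# descent. All feature values and labels below are synthetic.
import math

def train(samples, labels, lr=0.5, epochs=200):
    """Fit logistic-regression weights w and bias b by SGD."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid probability
            g = p - y                        # gradient of the log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0                 # 1 = malware

# Hypothetical static features, e.g. (entropy, suspicious-import ratio)
X = [(0.9, 0.8), (0.8, 0.9), (0.1, 0.2), (0.2, 0.1)]
y = [1, 1, 0, 0]
w, b = train(X, y)
print([predict(w, b, x) for x in X])  # recovers the labels [1, 1, 0, 0]
```

The memory footprint of this model is just one float per feature plus a bias, which is the general appeal of memory-optimized classifiers for constrained deployments.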
Computers doi: 10.3390/computers13030058
Authors: Lampros Karavidas Georgina Skraparli Thrasyvoulos Tsiatsos
The rapid changes in digital technology have had a substantial influence on education, resulting in the development of learning technologies (LTs) such as multimedia, computer-based training, intelligent tutoring systems, serious games, social media, and pedagogical agents. Serious games are games specifically created to fulfill a primary goal other than entertainment. They have demonstrated their effectiveness in several domains, although there are contradictory data on their efficiency in modifying behavior and on their possible disadvantages. The objective of our study is to compare the effectiveness of a serious game designed for students' self-assessment of their knowledge of web technologies against an equivalent online quiz that uses the same collection of questions. Our primary hypotheses were that those using the serious game would achieve better results in terms of engagement, subjective experience, and learning than those using the online quiz. To examine these research questions, the IMI questionnaire, the total number of completed questions, and post-test grades were used to compare the two groups, which together consisted of 34 undergraduate students. Our findings indicate that the serious game users did not have a better experience or better learning outcomes, but they did engage more, answering significantly more questions. Future steps include recruiting more participants and extending the experimental period.
]]>Computers doi: 10.3390/computers13030057
Authors: Aldo Xhako Antonis Katzourakis Theodoros Evdaimon Emmanouil Zidianakis Nikolaos Partarakis Xenophon Zabulis
In this paper, we present a comprehensive methodology to support the multifaceted process involved in the digitization, curation, and virtual exhibition of cultural heritage artifacts. The proposed methodology is applied in the context of a unique collection of contemporary dresses inspired by antiquity. Leveraging advanced 3D technologies, including lidar scanning and photogrammetry, we meticulously captured and transformed physical garments into highly detailed digital models. The postprocessing phase refined these models, ensuring an accurate representation of the intricate details and nuances inherent in each dress. Our collaborative efforts extended to the dissemination of this digital cultural heritage, as we partnered with the national aggregator in Greece, SearchCulture, to facilitate widespread access. The aggregation process streamlined the integration of our digitized content into a centralized repository, fostering cultural preservation and accessibility. Furthermore, we harnessed the power of these 3D models to move beyond traditional exhibition boundaries, crafting a virtual experience that transcends geographical constraints. This virtual exhibition not only enables online exploration but also invites participants to immerse themselves in a captivating virtual reality environment. The synthesis of cutting-edge digitization techniques, cultural aggregation, and immersive exhibition design not only contributes to the preservation of contemporary cultural artifacts but also redefines the ways in which audiences engage with and experience cultural heritage in the digital age.
]]>Computers doi: 10.3390/computers13020056
Authors: Hancheng Zuo Bernard Tiddeman
In this paper, we investigate the inpainting of normal maps that were captured from a lightstage. Occlusion of parts of the face during performance capture can be caused by the movement of, e.g., arms, hair, or props. Inpainting is the process of interpolating missing areas of an image with plausible data. We build on previous work on general image inpainting that uses generative adversarial networks (GANs), and extend our previous work on normal map inpainting to use a U-Net structured generator network. Our method takes into account the nature of normal map data, which requires modification of the loss function: we use a cosine loss rather than the more common mean squared error loss when training the generator. Due to the small amount of training data available, even when using synthetic datasets, significant augmentation is required, which also needs to take into account the particular nature of the input data; image flipping and in-plane rotations need to properly flip and rotate the normal vectors. During training, we monitor key performance metrics, including the average loss, structural similarity index measure (SSIM), and peak signal-to-noise ratio (PSNR) of the generator, alongside the average loss and accuracy of the discriminator. Our analysis reveals that the proposed model generates high-quality, realistic inpainted normal maps, demonstrating its potential for application to performance capture. The results of this investigation provide a baseline on which future researchers can build with more advanced networks, and a point of comparison for inpainting of the source images used to generate the normal maps.
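The abstract's two concrete departures from standard image inpainting, a cosine loss on unit normals and augmentation that transforms the normal vectors themselves, can be sketched in plain Python. This is a minimal illustration of the ideas, not the paper's implementation:

```python
import math

def cosine_loss(n_pred, n_true):
    """1 minus the cosine similarity of two normal vectors.
    Unlike MSE, this penalizes the angular deviation of the normals."""
    dot = sum(a * b for a, b in zip(n_pred, n_true))
    norm = (math.sqrt(sum(a * a for a in n_pred))
            * math.sqrt(sum(b * b for b in n_true)))
    return 1.0 - dot / norm

def hflip_normal_map(nmap):
    """Horizontally flip a normal map (rows of (x, y, z) normals).
    Mirroring the image also mirrors the surface, so the x component
    of every normal must be negated, not just the pixel positions."""
    return [[(-x, y, z) for (x, y, z) in reversed(row)] for row in nmap]
```

In a real training loop the loss would be averaged over all valid pixels, and the rotation augmentation would similarly apply the in-plane rotation matrix to the (x, y) components of each normal.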
]]>Computers doi: 10.3390/computers13020055
Authors: Alaa Eleyan Ebrahim Alboghbaish
Cardiovascular diseases (CVDs) like arrhythmia and heart failure remain the world’s leading cause of death. These conditions can be triggered by high blood pressure, diabetes, and simply the passage of time. The early detection of these heart issues, despite substantial advancements in artificial intelligence (AI) and technology, is still a significant challenge. This research addresses this hurdle by developing a deep-learning-based system that is capable of predicting arrhythmias and heart failure from abnormalities in electrocardiogram (ECG) signals. The system leverages a model that combines long short-term memory (LSTM) networks with convolutional neural networks (CNNs). Extensive experiments were conducted using ECG data from both the MIT-BIH and BIDMC databases under two scenarios. The first scenario employed data from five distinct ECG classes, while the second focused on classifying data from three classes. The results from both scenarios demonstrated that the proposed deep-learning-based classification approach outperformed existing methods.
]]>Computers doi: 10.3390/computers13020054
Authors: Georgios Lambropoulos Sarandis Mitropoulos Christos Douligeris Leandros Maglaras
The widespread adoption of cloud computing has resulted in centralized datacenter structures; however, there is a requirement for smaller-scale distributed infrastructures to meet the demands for speed, responsiveness, and security of critical applications. Single-Board Computers (SBCs) present numerous advantages, such as low power consumption, low cost, minimal heat emission, and high processing power, making them suitable for applications such as the Internet of Things (IoT), experimentation, and other advanced projects. This paper investigates the possibility of adopting virtualization technology on SBCs for the implementation of reliable and cost-efficient edge-computing environments. The results of this study are based on experimental implementations and testing conducted in the course of a case study performed on the edge infrastructure of a financial organization, where workload migration was achieved from a traditional to an SBC-based edge infrastructure. The performance of the two infrastructures was studied and compared during this process, providing important insights into power efficiency gains, resource utilization, and overall suitability for the organization’s operational needs.
]]>Computers doi: 10.3390/computers13020053
Authors: Amaan Jamil Gyorgy Denes
Over 300 million people who live with color vision deficiency (CVD) have a decreased ability to distinguish between colors, limiting their ability to interact with websites and software packages. User-interface designers have taken various approaches to tackle the issue, with most offering a high-contrast mode. The Web Content Accessibility Guidelines (WCAG) outline some best practices for maintaining accessibility that have been adopted and recommended by several governments; however, it is currently uncertain how this impacts perceived user functionality and whether it could result in a reduced aesthetic look. In the absence of subjective data, we aim to investigate how a CVD observer might rate the functionality and aesthetics of existing UIs. However, the design of a comparative study of CVD vs. non-CVD populations is inherently hard; therefore, we build on the successful field of physiologically based CVD models and propose a novel simulation-based experimental protocol, where non-CVD observers rate the relative aesthetics and functionality of screenshots of 20 popular websites as seen in full color vs. with simulated CVD. Our results show that relative aesthetics and functionality correlate positively and that an operating-system-wide high-contrast mode can reduce both aesthetics and functionality. While our results are only valid in the context of simulated CVD screenshots, the approach has the benefit of being easily deployable, and can help to spot a number of common pitfalls in production. Finally, we propose an AAA–A classification of the interfaces we analyzed.
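The simulation-based protocol relies on mapping each screenshot pixel through a physiologically based CVD model. The sketch below shows only the mechanical part, applying a 3x3 simulation matrix to a linear-RGB pixel; the matrix values are illustrative placeholders that merely collapse red-green contrast, not the published coefficients of any specific model:

```python
def simulate_cvd(rgb, matrix):
    """Apply a 3x3 color-deficiency simulation matrix to a linear-RGB
    pixel and clamp each channel to [0, 1]."""
    r, g, b = rgb
    out = []
    for row in matrix:
        v = row[0] * r + row[1] * g + row[2] * b
        out.append(min(1.0, max(0.0, v)))
    return tuple(out)

# Illustrative placeholder matrix in the spirit of a protanopia
# simulation (red and green channels become indistinguishable):
PROTAN_DEMO = [
    [0.11, 0.89, 0.00],
    [0.11, 0.89, 0.00],
    [0.00, 0.00, 1.00],
]

shifted = simulate_cvd((1.0, 0.0, 0.0), PROTAN_DEMO)  # pure red collapses
```

In the actual protocol, every pixel of a website screenshot would be transformed this way before being shown to the non-CVD raters.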
]]>Computers doi: 10.3390/computers13020052
Authors: Susmita Haldar Luiz Fernando Capretz
Software defect prediction models enable test managers to predict defect-prone modules and assist with delivering quality products. A test manager needs to identify the attributes that can influence defect prediction and must be able to trust the model outcomes. The objective of this research is to create software defect prediction models with a focus on interpretability. Additionally, it aims to investigate the impact of size, complexity, and other source code metrics on the prediction of software defects. This research also assesses the reliability of cross-project defect prediction. Well-known machine learning techniques, such as support vector machines, k-nearest neighbors, random forest classifiers, and artificial neural networks, were applied to publicly available PROMISE datasets. The interpretability of this approach was demonstrated using the SHapley Additive exPlanations (SHAP) and local interpretable model-agnostic explanations (LIME) techniques. The developed interpretable software defect prediction models showed reliability on independent and cross-project data. Finally, the results demonstrate that static code metrics can contribute to defect prediction models, and that the inclusion of explainability assists in establishing trust in the developed models.
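SHAP and LIME require their respective libraries; as a dependency-free stand-in for the same model-agnostic idea, the sketch below estimates feature importance by permuting one column at a time and measuring the accuracy drop. This is permutation importance, a simpler cousin of SHAP/LIME, and the model interface and data here are hypothetical:

```python
import random

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Model-agnostic importance: shuffle one feature column at a time
    and record the mean drop in accuracy. `model` is any object with a
    predict(rows) -> labels method (a hypothetical interface)."""
    rng = random.Random(seed)

    def accuracy(rows):
        preds = model.predict(rows)
        return sum(p == t for p, t in zip(preds, y)) / len(y)

    base = accuracy(X)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            drops.append(base - accuracy(X_perm))
        importances.append(sum(drops) / n_repeats)
    return importances

# Demo with a hypothetical threshold "model": feature 0 drives the
# label, feature 1 is constant noise.
class ThresholdModel:
    def predict(self, rows):
        return [1 if r[0] > 0.5 else 0 for r in rows]

X = [[0.9, 5.0], [0.1, 5.0], [0.8, 5.0], [0.2, 5.0]]
y = [1, 0, 1, 0]
imp = permutation_importance(ThresholdModel(), X, y)
```

The constant noise feature receives zero importance, while the label-driving feature does not, which is the same qualitative signal a SHAP summary plot conveys.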
]]>Computers doi: 10.3390/computers13020051
Authors: Jacob D. Hauenstein Timothy S. Newman
Approaches aimed at achieving improved energy efficiency for the determination of descriptors, which are used in volumetric data analysis and in one common mode of scientific visualisation, in one x86-class setting are described and evaluated. These approaches are evaluated against standard approaches for the computational setting. In all, six approaches for improved efficiency are considered: four of them are computation-based, and the other two are memory-based. The descriptors are classic gradient and curvature descriptors. In addition to their use in volume analyses, they are used in classic ray-casting-based direct volume rendering (DVR), which is a particular application area of interest here. An ideal combination of the described approaches applied to gradient descriptor determination allowed them to be computed with only 80% of the energy of a standard approach in the computational setting; energy efficiency was improved by a factor of 1.2. For curvature descriptor determination, the ideal combination of described approaches achieved a factor-of-two improvement in energy efficiency.
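The gradient descriptor referred to above is classically computed with central differences; the minimal sketch below shows that standard formulation (the paper's exact stencil and energy optimizations are not reproduced here):

```python
def gradient_at(vol, x, y, z):
    """Classic central-difference gradient descriptor at an interior
    voxel of a scalar volume stored as vol[z][y][x]."""
    gx = (vol[z][y][x + 1] - vol[z][y][x - 1]) / 2.0
    gy = (vol[z][y + 1][x] - vol[z][y - 1][x]) / 2.0
    gz = (vol[z + 1][y][x] - vol[z - 1][y][x]) / 2.0
    return (gx, gy, gz)

# A 3x3x3 volume whose values increase linearly along x,
# so the gradient at the center should be (1, 0, 0):
vol = [[[float(x) for x in range(3)] for _ in range(3)] for _ in range(3)]
g = gradient_at(vol, 1, 1, 1)
```

In ray-casting DVR this gradient doubles as the shading normal, which is why its per-voxel cost (and energy) matters.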
]]>Computers doi: 10.3390/computers13020050
Authors: José María Fernández-Batanero Marta Montenegro-Rueda José Fernández-Cerero Eloy López-Meneses
The use of Extended Reality in Primary Education classrooms has emerged as a transformative element that enhances the teaching and learning process of students. In this context, examining the various effects that this tool can generate is essential to identify both the opportunities and limitations that teachers face when incorporating this technology into their practices. The aim of this research is to analyse the impact of the use of Extended Reality as an educational resource in Primary Education, focusing on teachers’ perceptions. The information was collected through semi-structured interviews with 36 active teachers in Primary Education. The analysis of the data obtained identifies the benefits and functionalities offered by the implementation of Extended Reality in Primary Education classrooms, as well as the uncertainties and concerns that teachers have with the implementation of Extended Reality. The results highlight the significant opportunities that Extended Reality offers in the teaching–learning process, provided that teachers are adequately trained. Furthermore, this study offers valuable recommendations to guide future teachers and researchers in the successful integration of this technology into the educational process.
]]>Computers doi: 10.3390/computers13020049
Authors: Vasileios Sevetlidis George Pavlidis Spyridon G. Mouroutsos Antonios Gasteratos
Identifying accidents in road black spots is crucial for improving road safety. Traditional methodologies, although insightful, often struggle with the complexities of imbalanced datasets. While machine learning (ML) techniques have shown promise, our previous work revealed that supervised learning (SL) methods face challenges in effectively distinguishing accidents that occur in black spots from those that do not. This paper introduces a novel approach that leverages positive-unlabeled (PU) learning, a technique we previously applied successfully in the domain of defect detection. The results of this work demonstrate a statistically significant improvement in key performance metrics, including accuracy, precision, recall, F1-score, and AUC, compared to SL methods. This study thus establishes PU learning as a more effective and robust approach for accident classification in black spots, particularly in scenarios with highly imbalanced datasets.
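The abstract does not spell out which PU formulation is used. One classic PU technique, in the style of Elkan and Noto, trains an ordinary classifier to predict whether an example is *labeled* positive and then corrects its output by the label frequency; this hedged sketch shows only that correction step:

```python
def adjust_pu_probability(p_labeled, c):
    """Elkan-Noto style correction for PU learning: if a classifier
    predicts p(s=1|x), the chance an example is labeled, and
    c = P(s=1 | y=1) is the label frequency estimated on a held-out
    set of known positives, then p(y=1|x) = p(s=1|x) / c (capped at 1)."""
    return min(1.0, p_labeled / c)

# Hypothetical numbers: the classifier gives a 0.3 chance that an
# accident record is labeled as a black-spot accident, and only 60%
# of true black-spot accidents were ever labeled:
p_true_positive = adjust_pu_probability(0.3, 0.6)  # → 0.5
```

This is one standard PU recipe, named here for illustration; the paper's method may differ.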
]]>Computers doi: 10.3390/computers13020048
Authors: Kamil Andrzej Daniel Paweł Kowol Grazia Lo Sciuto
Several strategies for navigation in unfamiliar environments have been explored, notably leveraging advanced sensors and control algorithms for obstacle recognition in autonomous vehicles. This study introduces a novel approach featuring a redesigned joystick equipped with stepper motors and linear drives, facilitating WiFi communication with a four-wheel omnidirectional electric vehicle. The system’s drive units integrated into the joystick and the encompassing control algorithms are thoroughly examined, including analysis of stick deflection measurement and inter-component communication within the joystick assembly. Unlike conventional setups in which the joystick is tilted by the operator, two independent linear drives are employed to generate ample tensile force, effectively “overpowering” the operator’s input. Running on a Raspberry Pi, the software utilizes Python programming to enable joystick tilt control and to transmit orientation and axis deflection data to an Arduino unit. A fundamental haptic effect is achieved by elevating the minimum pressure required to deflect the joystick rod. Test measurements encompass detection of obstacles along the primary directions perpendicular to the electric vehicle’s trajectory, determination of the maximum achievable speed, and evaluation of the joystick’s maximum operational range within an illuminated environment.
]]>Computers doi: 10.3390/computers13020047
Authors: Dimitrios Tolikas Evangelos D. Spyrou Vassilios Kappatos
COVID-19 became a pandemic, which resulted in measures being taken for the health and safety of people. The spread of this disease is particularly evident in indoor spaces, which tend to get overcrowded. One such place is the airport, where a plethora of passengers gather in common areas such as coffee shops, duty-free shops, toilets, and gates. Guiding passengers to less overcrowded places within the airport may be a way to reduce disease spread. In this paper, we propose a passenger routing algorithm whereby passengers are guided to less crowded places using a weighting factor, which is minimised to accomplish the desired goal. We modeled a number of shops in an airport using the AnyLogic software and tested the algorithm, showing that exposure time is lower with routing and that people are appropriately spread out across the common spaces, thus preventing overcrowding. Finally, we modeled a real airport in Kavala, Greece, to show the efficiency of our approach.
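A minimal sketch of the routing idea: each candidate destination gets a weighting factor and the passenger is sent to the one that minimises it. The weighting form (crowding plus a distance term) and the coefficients below are illustrative assumptions, not the paper's exact formula:

```python
def route_passenger(occupancy, capacity, distance, alpha=1.0, beta=0.1):
    """Pick the destination minimizing an illustrative weighting factor
    combining crowding (occupancy/capacity) and walking distance."""
    def weight(name):
        return alpha * occupancy[name] / capacity[name] + beta * distance[name]
    return min(occupancy, key=weight)

# Hypothetical airport common areas:
occupancy = {"coffee": 18, "duty_free": 5, "gate_a": 30}
capacity  = {"coffee": 20, "duty_free": 25, "gate_a": 40}
distance  = {"coffee": 1.0, "duty_free": 3.0, "gate_a": 2.0}

best_area = route_passenger(occupancy, capacity, distance)  # "duty_free"
```

In a simulation such as the AnyLogic model, this decision would be re-evaluated as occupancies change, continuously steering passengers away from crowded areas.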
]]>Computers doi: 10.3390/computers13020046
Authors: Nikolaos Sergis Christos Troussas Akrivi Krouska Christina Tzortzi Georgios Bardis Cleo Sgouropoulou
The need for effective cognitive training methodologies has increased, particularly for individuals dealing with Attention Deficit Hyperactivity Disorder (ADHD). In response to this demand, Virtual Reality (VR) technology emerges as a promising tool to support cognitive functions. Addressing this imperative, our paper introduces ADHD Dog, a VR game designed to aid individuals with ADHD by harnessing the advancements in VR technology and cognitive science. Our approach integrates behavioral and sociocultural theories, alongside gamification, to foster player engagement and reinforce cognitive functions. The theories employed, including operant conditioning and social constructivism, are specifically chosen for their relevance to ADHD’s cognitive aspects and their potential to promote active and context-based engagement. ADHD Dog, grounded in the principles of neuroplasticity and behaviorist methods, distinguishes itself by utilizing technology to amplify cognitive functions, like impulse control, attention, and short-term memory. An evaluation by individuals with ADHD, psychologists, and computer scientists yielded promising results, underscoring the significant contribution of blending narrative-driven gameplay with behavioral and sociocultural theories, along with gamification, to ADHD cognitive training.
]]>Computers doi: 10.3390/computers13020045
Authors: Andrzej Ożadowicz
Smart home and building systems are popular solutions that support maintaining comfort and safety and improve energy efficiency in buildings. However, dynamically developing distributed network technologies, in particular the Internet of Things (IoT), are increasingly entering the above-mentioned application areas of building automation, offering new functional possibilities. The result of these processes is the emergence of many different solutions that combine field-level and information and communications technology (ICT) networks in various configurations and architectures. New paradigms are also emerging, such as edge and fog computing, providing support for local monitoring and control networks in the implementation of advanced functions and algorithms, including machine learning and artificial intelligence mechanisms. This paper collects state-of-the-art information in these areas, providing a systematic review of the literature and case studies with an analysis of selected development trends. The author systematized this information in the context of the potential development of building automation systems. Based on the conclusions of this analysis and discussion, a framework for the development of the Generic IoT paradigm in smart home and building applications has been proposed, along with a strengths, weaknesses, opportunities, and threats (SWOT) analysis of its usability. Future works are proposed as well.
]]>Computers doi: 10.3390/computers13020044
Authors: Mohamed Kamel Benbraika Okba Kraa Yassine Himeur Khaled Telli Shadi Atalla Wathiq Mansoor
Device-to-Device (D2D) communication is an emerging technology that is vital for the future of cellular networks, including 5G and beyond. Its potential lies in enhancing system throughput, offloading the network core, and improving spectral efficiency. Therefore, optimizing resource and power allocation to reduce co-channel interference is crucial for harnessing these benefits. In this paper, we conduct a comparative study of meta-heuristic algorithms, employing Genetic Algorithms (GAs), Particle Swarm Optimization (PSO), Bee Life Algorithm (BLA), and a novel combination of matching techniques with BLA for joint channel and power allocation optimization. The simulation results highlight the effectiveness of bio-inspired algorithms in addressing these challenges. Moreover, the proposed amalgamation of the matching algorithm with BLA outperforms other meta-heuristic algorithms, namely, PSO, BLA, and GA, in terms of throughput, convergence speed, and achieving practical solutions.
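As a hedged illustration of the GA component of the comparison (PSO and BLA are not sketched), the toy code below evolves channel assignments for D2D pairs against a simplified co-channel interference penalty. The operators, parameters, and fitness form are generic GA assumptions, not the paper's formulation:

```python
import random

def ga_channel_allocation(n_pairs, n_channels, fitness, pop_size=30,
                          generations=50, mut_rate=0.1, seed=1):
    """Toy Genetic Algorithm assigning one channel to each D2D pair.
    `fitness` maps an assignment (tuple of channel indices) to a score
    to maximize."""
    rng = random.Random(seed)
    pop = [tuple(rng.randrange(n_channels) for _ in range(n_pairs))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_pairs)      # one-point crossover
            child = list(a[:cut] + b[cut:])
            for i in range(n_pairs):             # per-gene mutation
                if rng.random() < mut_rate:
                    child[i] = rng.randrange(n_channels)
            children.append(tuple(child))
        pop = survivors + children
    return max(pop, key=fitness)

# Toy fitness: penalize pairs sharing a channel (co-channel interference).
def fitness(assign):
    return -sum(assign.count(c) ** 2 for c in set(assign))

best = ga_channel_allocation(n_pairs=6, n_channels=6, fitness=fitness)
```

A realistic fitness function would instead compute SINR-based throughput under power constraints, which is where the joint power allocation enters.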
]]>Computers doi: 10.3390/computers13020043
Authors: Lorenzo Porcelli Michele Mastroianni Massimo Ficco Francesco Palmieri
Despite growing concerns about privacy and an evolution in laws protecting users’ rights, there remains a gap between how industries manage data and how users can express their preferences. This imbalance often favors industries, forcing users to repeatedly define their privacy preferences each time they access a new website. This process contributes to the privacy paradox. We propose a user support tool named the User Privacy Preference Management System (UPPMS) that eliminates the need for users to handle intricate banners or deceptive patterns. We have set up a process to guide even a non-expert user in creating a standardized personal privacy policy, which is automatically applied to every visited website by interacting with cookie banners. The process of generating actions to apply the user’s policy leverages customized Large Language Models. Experiments demonstrate the feasibility of analyzing HTML code to understand and automatically interact with cookie banners, even implementing complex policies. Our proposal aims to address the privacy paradox related to cookie banners by reducing information overload and decision fatigue for users. It also simplifies user navigation by eliminating the need to repeatedly declare preferences in intricate cookie banners on every visited website, while protecting users from deceptive patterns.
]]>Computers doi: 10.3390/computers13020042
Authors: Beatriz Miranda Paula Alexandra Rego Luís Romero Pedro Miguel Moreira
Schizophrenia is a mental illness that requires cognitive treatments to decrease symptoms for which medication is less effective. Innovative strategies such as the use of Virtual Reality (VR) are being tested, but there is still a long way to go in developing solutions as effective as the current conventional forms of treatment. To study more effective ways of developing these systems, an immersive VR game with a tutorial and two levels of difficulty was developed. Tests were performed with twenty-one healthy subjects, showing promising results and indicating VR’s potential as a complementary approach to conventional treatments for schizophrenia. When properly applied, the use of VR could lead to more efficient and accessible treatments, potentially reducing costs and reaching a broader population.
]]>Computers doi: 10.3390/computers13020041
Authors: Parisasadat Shojaei Elena Vlahu-Gjorgievska Yang-Wai Chow
Health information systems (HISs) have immense value for healthcare institutions, as they provide secure storage, efficient retrieval, insightful analysis, seamless exchange, and collaborative sharing of patient health information. HISs are implemented to meet patient needs, as well as to ensure the security and privacy of medical data, including confidentiality, integrity, and availability, which are necessary to achieve high-quality healthcare services. This systematic literature review identifies various technologies and methods currently employed to enhance the security and privacy of medical data within HISs. Various technologies have been utilized to enhance the security and privacy of healthcare information, such as the IoT, blockchain, mobile health applications, cloud computing, and combined technologies. This study also identifies three key security aspects, namely, secure access control, data sharing, and data storage, and discusses the challenges faced in each aspect that must be enhanced to ensure the security and privacy of patient information in HISs.
]]>Computers doi: 10.3390/computers13020040
Authors: Diego Johnson Brayan Mamani Cesar Salas
The impact of the COVID-19 pandemic on education has accelerated the shift in learning paradigms toward synchronous and asynchronous online approaches, significantly reducing students’ social interactions. This study introduces CollabVR, a social virtual reality (SVR) platform designed to improve social interaction among remote university students through extracurricular activities (ECAs). Leveraging technologies such as Unity3D for the development of the SVR environment, Photon Unity Networking for real-time participant connection, Oculus Quest 2 for an immersive virtual reality experience, and AWS for efficient and scalable system performance, it aims to mitigate this social interaction deficit. The platform was tested using the sociability scale of Kreijns et al., comparing it with traditional online platforms. Results from a focus group in Lima, Peru, with students participating in online ECAs, demonstrated that CollabVR significantly improved participants’ perceived social interaction, with a mean of 4.65 ± 0.49 compared to a mean of 2.35 ± 0.75 for traditional platforms, fostering a sense of community and improving communication. The study highlights the potential of CollabVR as a powerful tool to overcome socialization challenges in virtual learning environments, suggesting a more immersive and engaging approach to distance education.
]]>Computers doi: 10.3390/computers13020039
Authors: Gordan Gledec Mladen Sokele Marko Horvat Miljenko Mikuc
This paper introduces a novel approach to the creation and application of confusion matrices for error pattern discovery in spellchecking for the Croatian language. The experimental dataset has been derived from a corpus of mistyped words and user corrections collected since 2008 using the Croatian spellchecker available at ispravi.me. The important role of confusion matrices in enhancing the precision of spellcheckers, particularly within the diverse linguistic context of the Croatian language, is investigated. Common causes of spelling errors, emphasizing the challenges posed by diacritic usage, have been identified and analyzed. This research contributes to the advancement of spellchecking technologies and provides a more comprehensive understanding of linguistic details, particularly in languages with diacritic-rich orthographies, like Croatian. The presented user-data-driven approach demonstrates the potential for custom spellchecking solutions, especially considering the ever-changing dynamics of language use in digital communication.
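A minimal sketch of building a character-level confusion matrix from (typed, intended) word pairs follows. The alignment here handles only equal-length substitutions, which covers diacritic errors of the kind the abstract emphasizes; the paper's matrices and the example words are not taken from the ispravi.me corpus:

```python
from collections import defaultdict

def build_confusion_matrix(pairs):
    """Count character-level substitutions from (typed, intended)
    word pairs of equal length, e.g. 'c' typed in place of 'č'."""
    matrix = defaultdict(int)
    for typed, intended in pairs:
        if len(typed) != len(intended):
            continue  # a real aligner would also handle insertions/deletions
        for t, i in zip(typed, intended):
            if t != i:
                matrix[(t, i)] += 1
    return dict(matrix)

# Illustrative Croatian diacritic errors (not corpus data):
pairs = [("macka", "mačka"), ("cesto", "često"), ("kuca", "kuća")]
confusions = build_confusion_matrix(pairs)
```

A spellchecker can then rank correction candidates by how frequently each substitution occurs in the matrix, so 'č' outranks other edits when the user typed 'c'.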
]]>Computers doi: 10.3390/computers13020038
Authors: Mulugeta Adibaru Kiflie Durga Prasad Sharma Mesfin Abebe Haile Ramasamy Srinivasagan
Ethiopia is renowned for its rich biodiversity, supporting a diverse variety of medicinal plants with significant potential for therapeutic applications. In developing countries where modern healthcare facilities are scarce, traditional medicine emerges as a cost-effective and culturally aligned primary healthcare solution. In Ethiopia, the majority of the population, around 80%, continues to prefer traditional medicine as its primary healthcare option, as does treatment for a significant proportion, approximately 90%, of livestock. Nevertheless, the precise identification of specific plant parts and their associated uses has posed a formidable challenge due to the intricate nature of traditional healing practices. To address this challenge, which is the primary objective of this research, we employed a majority-vote-based ensemble deep learning approach to identify the parts and uses of Ethiopian indigenous medicinal plant species. To design our proposed model, EfficientNetB0, EfficientNetB2, and EfficientNetB4 were used as benchmark models and combined in a majority-vote-based ensemble. This research underscores the potential of ensemble deep learning and transfer learning methodologies to accurately identify the parts and uses of Ethiopian indigenous medicinal plant species. Notably, our proposed EfficientNet-based ensemble approach demonstrated remarkable accuracy, achieving a test and validation accuracy of 99.96%. Future endeavors will prioritize expanding the dataset, refining feature-extraction techniques, and creating user-friendly interfaces to overcome current dataset limitations.
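The majority-vote combination of the three EfficientNet backbones can be illustrated independently of the networks themselves. This hedged sketch shows only the vote combiner; the tie-break rule (trusting the first model) is an assumption for illustration, not necessarily the paper's rule:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model class predictions, e.g. from EfficientNetB0,
    B2, and B4, by majority vote."""
    counts = Counter(predictions)
    label, votes = counts.most_common(1)[0]
    if votes > len(predictions) // 2:
        return label
    # Tie-break assumption: fall back to the first model's prediction.
    return predictions[0]

# Hypothetical per-image predictions from the three backbones:
combined = majority_vote(["leaf", "leaf", "root"])  # "leaf"
```

With three models, a class needs at least two votes to win outright, which is what makes the ensemble more robust than any single backbone's mistake.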
]]>Computers doi: 10.3390/computers13020037
Authors: Karwan Mahdi Hama Hama Rawf Ayub Othman Abdulrahman Aree Ali Mohammed
Sign Language Recognition (SLR) supports the deaf community, since it is used to aid communication, education, and socialization. In this study, the results of using a modified Convolutional Neural Network (CNN) technique to develop a model for real-time Kurdish sign recognition are presented. Recognizing the Kurdish alphabet is the primary focus of this investigation. Using a variety of activation functions over several iterations, the model was trained and then used to make predictions on the KuSL2023 dataset. The dataset contains a total of 71,400 images, drawn from two separate sources, representing the 34 signs of the alphabets used by the Kurds. A large collection of real user images is used to evaluate the accuracy of the proposed strategy. A novel Kurdish Sign Language (KuSL) classification model is presented in this research. Furthermore, the hand region must be identified in pictures with complex backdrops, including lighting, ambience, and image color changes of varying intensities. Using a genuine public dataset, real-time classification, and personal independence while maintaining high classification accuracy, the proposed technique improves on previous research on KuSL detection. The collected findings demonstrate that the proposed system performs well, with an average training accuracy of 99.05% for both classification and prediction models. Compared to earlier research on KuSL, these outcomes indicate very strong performance.
]]>Computers doi: 10.3390/computers13020036
Authors: Manar Khalid Ibraheem Ibraheem Mbarka Belhaj Mohamed Ahmed Fakhfakh
In the past ten years, rates of forest fires around the world have increased significantly. Forest fires greatly affect the ecosystem by damaging vegetation. They have both human and natural causes: human causes include intentional and irregular burning operations, while global warming is a major natural cause. Early detection reduces the rate at which fires spread to larger areas by speeding up extinguishing with the appropriate equipment and materials. In this research, an early detection system for forest fires called Forest Defender Fusion is proposed. This system achieves high accuracy and long-term monitoring of the site by using an Intermediate Fusion VGG16 model and the Enhanced Consumed Energy-Leach protocol (ECP-LEACH). The Intermediate Fusion VGG16 model receives RGB (red, green, blue) and IR (infrared) images from drones to detect forest fires. The Forest Defender Fusion system regulates energy consumption in the drones and achieves high detection accuracy, so that forest fires are detected early. The detection model was trained on the FLAME 2 dataset and obtained an accuracy of 99.86%, outperforming other models that take RGB and IR images as joint input. A real-time simulation of the system was implemented in Python.
]]>Computers doi: 10.3390/computers13020035
Authors: Nuno Verdelho Trindade Pedro Leitão Daniel Gonçalves Sérgio Oliveira Alfredo Ferreira
Dam safety control is a multifaceted activity that requires analysis, monitoring, and structural behavior prediction. It entails interpreting vast amounts of data from sensor networks integrated into dam structures. The application of extended reality technologies for situated immersive analysis allows data to be contextualized directly over the physical referent. Such types of visual contextualization have been known to improve analytical reasoning and decision making. This study presents DamVR, a virtual reality tool for off-site, proxied situated structural sensor data visualization. In addition to describing the tool’s features, it evaluates usability and usefulness with a group of 22 domain experts. It also compares its performance with an existing augmented reality tool for the on-site, immediate situated visualization of structural data. Participant responses to a survey reflect a positive assessment of the proxied situated approach’s usability and usefulness. This approach shows a decrease in performance (task completion time and errors) for more complex tasks but no significant differences in user experience scores when compared to the immediate situated approach. The findings indicate that while results may depend strongly on factors such as the realism of the virtual environment, the immediate physical referent offered some advantages over the proxied one in the contextualization of data.
]]>Computers doi: 10.3390/computers13020034
Authors: Michalis Panayides Andreas Artemiou
In this paper, we propose a Support Vector Machine (SVM)-type algorithm that is computationally faster than other common algorithms in the SVM family. The new algorithm uses distributional information of each class and therefore combines the benefit of using the class variance in the optimization with the least squares approach, which gives an analytic solution to the minimization problem and is thus computationally efficient. We demonstrate an important property of the algorithm that allows us to address the inversion of a singular matrix in the solution. We also demonstrate through real data experiments that we improve on the computational time without losing any accuracy when compared to previously proposed algorithms.
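The role of regularized inversion can be illustrated with a minimal pure-Python sketch (our own toy least-squares classifier under illustrative data, not the authors’ algorithm): the analytic solution solves the normal equations, and a small ridge term on the diagonal keeps the system solvable even when the Gram matrix is singular.

```python
# Minimal least-squares classifier with labels +/-1. The analytic
# solution solves (X^T X + lam*I) w = X^T y; the lam*I ridge term keeps
# the system invertible, illustrating how a singular matrix can be
# handled in a closed-form solution.

def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def solve(A, b):
    """Gaussian elimination with partial pivoting for Ax = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def fit(X, y, lam=1e-6):
    Xb = [row + [1.0] for row in X]   # append a bias column
    Xt = transpose(Xb)
    G = matmul(Xt, Xb)                # X^T X
    for i in range(len(G)):
        G[i][i] += lam                # ridge term fixes singularity
    rhs = [sum(Xt[i][k] * y[k] for k in range(len(y))) for i in range(len(Xt))]
    return solve(G, rhs)

def predict(w, x):
    s = sum(wi * xi for wi, xi in zip(w, x + [1.0]))
    return 1 if s >= 0 else -1

# Two linearly separable clusters
X = [[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]]
y = [-1, -1, 1, 1]
w = fit(X, y)
print([predict(w, x) for x in X])   # [-1, -1, 1, 1]
```

Because the fit is a single linear solve rather than an iterative optimization, training cost is dominated by forming and solving one small system, which is the source of the computational advantage the abstract describes.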
]]>Computers doi: 10.3390/computers13020033
Authors: Willy Scheibel Jasper Blum Franziska Lauterbach Daniel Atzberger Jürgen Döllner
Readily available software analysis and analytics tools are often operated within external services, where the measured software analysis data are kept internally and no external access to the data is available. We propose an approach to integrate visual software analysis on the GitHub platform by leveraging GitHub Actions and the GitHub API, covering both analysis and visualization. The process is to perform software analysis for each commit, e.g., computing static source code complexity metrics, and to augment the commit with the resulting data, stored as git objects within the same repository. We show that this approach is feasible by integrating it into 64 open source TypeScript projects. Furthermore, we analyze the impact on Continuous Integration (CI) run time and repository storage. The stored software analysis data are externally accessible to allow for visualization tools, such as software maps. The effort to integrate our approach is limited to enabling the analysis component within a project’s CI on GitHub and embedding an HTML snippet into the project’s website for visualization. This gives a large number of projects access to software analysis and provides a means to communicate the current status of a project.
]]>Computers doi: 10.3390/computers13020032
Authors: Christoph Polle Stefan Bosse Axel S. Herrmann
Machine learning techniques such as deep learning have already been successfully applied in Structural Health Monitoring (SHM) for damage localization using Ultrasonic Guided Waves (UGW) at various temperatures. However, a common issue arises due to the time-consuming nature of collecting guided wave measurements at different temperatures, resulting in an insufficient amount of training data. Since SHM systems are predominantly employed in sensitive structures, there is a significant interest in utilizing methods and algorithms that are transparent and comprehensible. In this study, a method is presented to augment feature data by generating a large number of training features from a relatively limited set of measurements. In addition, robustness to environmental changes, e.g., temperature fluctuations, is improved. This is achieved by utilizing a known temperature compensation method called temperature scaling to determine how signal features vary as functions of temperature. These functions can then be used for data generation. To gain a better understanding of how the damage localization predictions are made, a known explainable neural network (XANN) architecture is employed and trained with the generated data. The trained XANN model was then used to examine and validate the artificially generated signal features and to improve the augmentation process. The presented method demonstrates a significant increase in the number of training data points. Furthermore, the use of the XANN architecture as a predictor model enables a deeper interpretation of the prediction methods employed by the network.
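The augmentation principle can be sketched in a few lines (an illustrative simplification under our own assumptions, not the paper’s pipeline: here the feature-versus-temperature function is taken to be linear and is fitted to invented measurements):

```python
# Temperature-based feature augmentation sketch: fit a trend of a signal
# feature versus temperature from a few real measurements, then
# synthesize training features on a dense temperature grid.

def linear_fit(temps, feats):
    """Ordinary least-squares slope/intercept for feat = a*T + b."""
    n = len(temps)
    mt = sum(temps) / n
    mf = sum(feats) / n
    a = sum((t - mt) * (f - mf) for t, f in zip(temps, feats)) / \
        sum((t - mt) ** 2 for t in temps)
    return a, mf - a * mt

def augment(temps, feats, t_min, t_max, n_points):
    a, b = linear_fit(temps, feats)
    step = (t_max - t_min) / (n_points - 1)
    return [(t_min + i * step, a * (t_min + i * step) + b)
            for i in range(n_points)]

# A handful of real measurements: the feature amplitude drifts with
# temperature (toy values).
measured_T = [20.0, 30.0, 40.0, 50.0]
measured_F = [1.00, 0.95, 0.90, 0.85]

synthetic = augment(measured_T, measured_F, 20.0, 50.0, 61)
print(len(synthetic))   # 61 synthetic training points from 4 measurements
```

A real implementation would fit a richer temperature model per feature, but the mechanism is the same: a compact fitted function turns a handful of measurements into arbitrarily many training samples.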
]]>Computers doi: 10.3390/computers13010031
Authors: Muhammad Asad Arshed Shahzad Mumtaz Muhammad Ibrahim Christine Dewi Muhammad Tanveer Saeed Ahmed
In response to the rapid advancements in facial manipulation technologies, particularly facilitated by Generative Adversarial Networks (GANs) and Stable Diffusion-based methods, this paper explores the critical issue of deepfake content creation. The increasing accessibility of these tools necessitates robust detection methods to curb potential misuse. In this context, this paper investigates the potential of Vision Transformers (ViTs) for effective deepfake image detection, leveraging their capacity to extract global features. Objective: The primary goal of this study is to assess the viability of ViTs in detecting multiclass deepfake images compared to traditional Convolutional Neural Network (CNN)-based models. By framing the deepfake problem as a multiclass task, this research introduces a novel approach, considering the challenges posed by Stable Diffusion and StyleGAN2. The objective is to enhance understanding and efficacy in detecting manipulated content within a multiclass context. Novelty: This research distinguishes itself by approaching the deepfake detection problem as a multiclass task, introducing new challenges associated with Stable Diffusion and StyleGAN2. The study pioneers the exploration of ViTs in this domain, emphasizing their potential to extract global features for enhanced detection accuracy. The novelty lies in addressing the evolving landscape of deepfake creation and manipulation. Results and Conclusion: Through extensive experiments, the proposed method exhibits high effectiveness, achieving impressive detection accuracy, precision, and recall, and an F1 score of 99.90% on a multiclass-prepared dataset. The results underscore the significant potential of ViTs in contributing to a more secure digital landscape by robustly addressing the challenges posed by deepfake content, particularly in the presence of Stable Diffusion and StyleGAN2.
The proposed model also outperformed state-of-the-art CNN-based models, i.e., ResNet-50 and VGG-16.
]]>Computers doi: 10.3390/computers13010030
Authors: Changxing Chen Afizan Azman
This study introduces a novel approach to address challenges in workpiece surface defect identification. It presents an enhanced Single Shot MultiBox Detector model incorporating attention mechanisms and multi-feature fusion. The research methodology involves carefully curating a dataset from authentic on-site factory production, enabling the training of a model with robust real-world generalization. Building on the Single Shot MultiBox Detector model, channel and spatial attention mechanisms were integrated into the feature extraction network. Diverse feature extraction methods enhance the network’s focus on crucial information, improving its defect detection efficacy. The proposed model achieves a significant Mean Average Precision (mAP) improvement, reaching 99.98% precision, a substantial 3% advancement over existing methodologies. Notably, the values of the P-R curves in object detection approach 1 for each category, which allows a better balance between the requirements of real-time detection and precision. Within the threshold range of 0.2 to 1, the model maintains a stable level of precision, consistently remaining between 0.99 and 1. In addition, the average running speed is only 2 fps lower than that of other models, and the reduction in detection speed after the model improvement is kept within 1%. The experimental results indicate that the model excels in pixel-level defect identification, which is crucial for precise defect localization. Empirical experiments validate the algorithm’s superior performance. This research represents a pivotal advancement in workpiece surface defect identification, combining technological innovation with practical efficacy.
]]>Computers doi: 10.3390/computers13010029
Authors: Carles Igual Alberto Castillo Jorge Igual
Electromyography-based wearable biosensors are used for prosthetic control. Machine learning prosthetic controllers are based on classification and regression models. The advantage of the regression approach is that it permits us to obtain a smoother and more natural controller. However, the existing training methods for regression-based solutions are the same as the training protocol used in the classification approach, where only a finite set of movements is trained. In this paper, we present a novel training protocol for myoelectric regression-based solutions that includes a feedback term, which allows us to explore more than a finite set of movements and is automatically adjusted according to the real-time performance of the subject during the training session. Consequently, the algorithm distributes the training time efficiently, focusing on the movements where performance is worse and optimizing the training for each user. We tested and compared the existing and new training strategies in 20 able-bodied participants and 4 amputees. The results show that the novel training procedure autonomously produces a better training session. As a result, the new controller outperforms the one trained with the existing method: for the able-bodied participants, the average number of targets hit increased from 86% to 95% and the path efficiency from 40% to 84%, while for the subjects with limb deficiencies, the completion rate increased from 58% to 69% and the path efficiency from 24% to 56%.
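The time-allocation idea behind such a feedback term can be sketched as follows (the movement names, error rates, and proportional weighting are our own illustrative assumptions, not the authors’ exact protocol): the remaining training time is distributed across movements in proportion to each movement’s current error, so poorly performed movements receive more practice.

```python
# Error-proportional training-time allocation sketch.

def allocate_time(errors, total_time):
    """errors: dict movement -> current error rate in [0, 1]."""
    total_err = sum(errors.values())
    if total_err == 0:                       # everything perfect: split evenly
        share = total_time / len(errors)
        return {m: share for m in errors}
    return {m: total_time * e / total_err for m, e in errors.items()}

# 600 s of training left; "pronate" is currently the worst movement.
errors = {"open": 0.05, "close": 0.25, "pronate": 0.40, "supinate": 0.30}
plan = allocate_time(errors, total_time=600)
print(plan)   # pronate gets the largest share of the session
```

In a real session the error rates would be re-estimated continuously, so the allocation adapts to each user as training progresses.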
]]>Computers doi: 10.3390/computers13010028
Authors: Loan Nguyen Sarath Tomy Eric Pardede
The requirement to develop a smart education system is critical in the era of ubiquitous technology. In the smart education environment, intelligent pedagogies are constructed to take advantage of technological devices and foster learners’ competencies, which undoubtedly assist learners in dealing with knowledge and handling issues in a dynamic society more effectively and productively. This research suggests two effective learning strategies: (1) collaborative learning, which helps learners improve their knowledge and skills by exchanging resources and experiences, and (2) e-mentoring, which connects learners to a wide range of professional communities. This research first proposes a model to show how these two learning methods help learners achieve their goals, along with a set of hypotheses that are explained in detail. Then, a smart education system is proposed which comprises the two learning strategies with the necessary features. Lastly, two questionnaires, one for facilitators and the other for learners, are used to evaluate the usefulness and the feasibility of the proposed model in a real-world educational environment. The great majority of respondents agreed with all the statements, demonstrating the value of the research for educators and learners.
]]>Computers doi: 10.3390/computers13010027
Authors: Latifa Albshaier Seetah Almarri M. Hafizur Rahman
The Internet’s expansion has changed how services are accessed and how businesses operate. Blockchain is an innovative technology that emerged after the rise of the Internet. It maintains transactions on encrypted databases that are distributed among many computer networks, much like digital ledgers for online transactions. This technology has the potential to establish a decentralized marketplace for Internet retailers. Sensitive information, like customer data and financial statements, is routinely transferred in e-commerce. As a result, the system becomes a prime target for cybercriminals seeking illegal access to data. As e-commerce grows, so does the frequency of hacker attacks, raising concerns about the safety of e-commerce platforms’ databases. Owing to the sensitivity of customer data, employee records, and customer records, organizations must ensure their protection. A data breach not only affects an enterprise’s financial performance but also erodes clients’ confidence in the platform. Currently, e-commerce businesses face numerous challenges, including the security of the e-commerce system and transparency of, and trust in, its effectiveness. A solution to these issues is the application of blockchain technology in the e-commerce industry. Blockchain simplifies fraud detection and investigation by recording transactions and their accompanying data. It enables transaction tracking by creating a detailed record of all the related data, which can assist in identifying and preventing fraud in the future. A blockchain cryptocurrency transaction records the sender’s address, the recipient’s address, the amount transferred, and a timestamp, creating an immutable and transparent ledger of all transaction data.
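The ledger idea in the closing sentences can be sketched with Python’s standard library (a toy illustration, not a production blockchain; addresses and timestamps are invented): each transaction records sender, recipient, amount, and timestamp, and is chained to its predecessor by a hash, so tampering with any recorded transaction invalidates every later block.

```python
import hashlib
import json

def block_hash(data):
    # Deterministic hash over the block's fields.
    return hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()

def add_transaction(chain, sender, recipient, amount, timestamp):
    data = {"sender": sender, "recipient": recipient, "amount": amount,
            "timestamp": timestamp,
            "prev": chain[-1]["hash"] if chain else "0" * 64}
    chain.append({**data, "hash": block_hash(data)})

def verify(chain):
    for i, blk in enumerate(chain):
        data = {k: v for k, v in blk.items() if k != "hash"}
        if blk["hash"] != block_hash(data):          # block was altered
            return False
        prev = chain[i - 1]["hash"] if i else "0" * 64
        if data["prev"] != prev:                     # chain link broken
            return False
    return True

chain = []
add_transaction(chain, "alice", "bob", 2.5, 1700000000)
add_transaction(chain, "bob", "carol", 1.0, 1700000060)
print(verify(chain))          # True
chain[0]["amount"] = 99.0     # tamper with a recorded transaction
print(verify(chain))          # False: the stored hash no longer matches
```

This is exactly the property that makes the ledger useful for fraud investigation: any retroactive edit is immediately detectable.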
]]>Computers doi: 10.3390/computers13010026
Authors: Kunbolat Algazy Kairat Sakan Ardabek Khompysh Dilmukhanbet Dyusenbayev
The distinguishing feature of hash-based algorithms is their high confidence in security. When designing electronic signature schemes, proofs of security reduction to certain properties of cryptographic hash functions are used. This means that if the scheme is compromised, then one of these properties will be violated. It is important to note that the properties of cryptographic hash functions have been studied for many years, but if a specific hash function used in a protocol turns out to be insecure, it can simply be replaced with another one while keeping the overall construction unchanged. This article describes a new post-quantum signature algorithm, Syrga-1, based on a hash function. This algorithm is designed to sign r messages with a single secret key. One of the key primitives of the signature algorithm is a cryptographic hash function. The proposed algorithm uses the HAS01 hashing algorithm developed by researchers from the Information Security Laboratory of the Institute of Information and Computational Technologies. The security and efficiency of the specified hash algorithm have been demonstrated in other articles by its authors. Hash-based signature schemes are attractive as post-quantum signature schemes because their security can be quantified, and their security has been proven.
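The general principle of hash-based signatures can be illustrated with a classic Lamport one-time signature (this is NOT the Syrga-1 scheme or the HAS01 hash function from the paper; SHA-256 stands in for the hash primitive): security reduces to the preimage resistance of the hash, and the hash function could be swapped out without changing the construction.

```python
import hashlib
import os

H = lambda b: hashlib.sha256(b).digest()

def keygen():
    # Secret key: 256 pairs of random strings; public key: their hashes.
    sk = [[os.urandom(32), os.urandom(32)] for _ in range(256)]
    pk = [[H(s0), H(s1)] for s0, s1 in sk]
    return sk, pk

def bits(msg):
    # The 256 bits of the message digest select which secrets to reveal.
    d = H(msg)
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg):
    return [sk[i][b] for i, b in enumerate(bits(msg))]

def verify_sig(pk, msg, sig):
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(bits(msg)))

sk, pk = keygen()
sig = sign(sk, b"post-quantum hello")
print(verify_sig(pk, b"post-quantum hello", sig))   # True
print(verify_sig(pk, b"tampered message", sig))     # False
```

A Lamport key signs only one message safely; schemes like the one described above extend the idea so that r messages can be signed with a single secret key.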
]]>Computers doi: 10.3390/computers13010025
Authors: Olusola Adeniyi Ali Safaa Sadiq Prashant Pillai Mohammad Aljaidi Omprakash Kaiwartya
In recent years, Mobile Edge Computing (MEC) has revolutionized the landscape of the telecommunication industry by offering low-latency, high-bandwidth, and real-time processing. With this advancement comes a broad range of security challenges, the most prominent of which is Distributed Denial of Service (DDoS) attacks, which threaten the availability and performance of MEC’s services. In most cases, Intrusion Detection Systems (IDSs), security tools that monitor networks and systems for suspicious activity and notify administrators in real time of potential cyber threats, have relied on shallow Machine Learning (ML) models that are limited in their abilities to identify and mitigate DDoS attacks. This article highlights the drawbacks of current IDS solutions, primarily their reliance on shallow ML techniques, and proposes a novel hybrid Autoencoder–Multi-Layer Perceptron (AE–MLP) model for intrusion detection as a solution against DDoS attacks in the MEC environment. The proposed hybrid AE–MLP model leverages autoencoders’ feature extraction capabilities to capture intricate patterns and anomalies within network traffic data. This extracted knowledge is then fed into a Multi-Layer Perceptron (MLP) network, enabling deep learning techniques to further analyze and classify potential threats. By integrating both AE and MLP, the hybrid model achieves higher accuracy and robustness in identifying DDoS attacks while minimizing false positives. Extensive experiments using the recently released NF-UQ-NIDS-V2 dataset, which contains a wide range of DDoS attacks, demonstrate that the proposed hybrid AE–MLP model achieves a high accuracy of 99.98%. Based on these results, the hybrid approach performs better than several similar techniques.
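The anomaly-detection principle the autoencoder stage relies on can be shown in a drastically simplified sketch (our own toy stand-in, not the paper’s AE–MLP model: the learned "reconstruction" is reduced to a feature-wise mean of benign traffic, and all numbers are invented): records whose reconstruction error exceeds a threshold learned from normal data are flagged.

```python
# Reconstruction-error anomaly detection, reduced to its essentials.

def fit_profile(benign):
    """Stand-in 'autoencoder': the feature-wise mean of benign records."""
    n = len(benign)
    return [sum(row[i] for row in benign) / n for i in range(len(benign[0]))]

def recon_error(profile, row):
    # Squared reconstruction error against the learned profile.
    return sum((x - m) ** 2 for x, m in zip(row, profile))

# Toy benign traffic features, e.g. (packet rate, payload entropy).
benign = [[1.0, 0.1], [0.9, 0.12], [1.1, 0.08]]
profile = fit_profile(benign)
threshold = max(recon_error(profile, r) for r in benign) * 2

normal = [1.05, 0.11]
ddos = [8.0, 0.9]   # e.g. a flood with an abnormal packet rate
print(recon_error(profile, normal) <= threshold)   # True: looks benign
print(recon_error(profile, ddos) <= threshold)     # False: flagged
```

In the hybrid model, a trained autoencoder plays the role of `fit_profile`/`recon_error`, and its compressed representation is passed on to an MLP classifier instead of a simple threshold.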
]]>Computers doi: 10.3390/computers13010024
Authors: Angeliki Voreopoulou Stylianos Mystakidis Avgoustos Tsinakos
A significant volume of literature has extensively reported on and presented the benefits of employing escape classroom games (ECGs), on one hand, and on augmented reality (AR) in English language learning, on the other. However, there is little evidence on how AR-powered ECGs can enhance deep and meaningful foreign language learning. Hence, this study presents the design, development and user evaluation of an innovative augmented reality escape classroom game created for teaching English as a foreign language (EFL). The game comprises an imaginative guided group tour around the Globe Theatre in London that is being disrupted by Shakespeare’s ghost. The game was evaluated by following a qualitative research method that depicts the in-depth perspectives of ten in-service English language teachers. The data collection instruments included a 33-item questionnaire and semi-structured interviews. The findings suggest that this escape game is a suitable pedagogical tool for deep and meaningful language learning and that it can raise cultural awareness, while enhancing vocabulary retention and the development of receptive and productive skills in English. Students’ motivation and satisfaction levels toward language learning are estimated to remain high due to the game’s playful nature, its interactive elements, as well as the joyful atmosphere created through active communication, collaboration, creativity, critical thinking and peer work. This study provides guidelines and support for the design and development of similar augmented reality escape classroom games (ARECGs) to improve teaching practices and foreign language education.
]]>Computers doi: 10.3390/computers13010023
Authors: Kaleb Horvath Mohamed Riduan Abid Thomas Merino Ryan Zimmerman Yesem Peker Shamim Khan
We have designed a real-world smart building energy fault detection (SBFD) system on a cloud-based Databricks workspace, a high-performance computing (HPC) environment for big-data-intensive applications powered by Apache Spark. By avoiding a Smart Building Diagnostics as a Service approach and keeping a tightly centralized design, the rapid development and deployment of the cloud-based SBFD system was achieved within one calendar year. Thanks to Databricks’ built-in scheduling interface, a continuous pipeline of real-time ingestion, integration, cleaning, and analytics workflows capable of energy consumption prediction and anomaly detection was implemented and deployed in the cloud. The system currently provides fault detection in the form of predictions and anomaly detection for 96 buildings on an active military installation. The system’s various jobs all converge within 14 min on average. It facilitates seamless interaction between our workspace and cloud data lake storage, which supports the secure and automated initial ingestion of raw data supplied by a third party via Secure File Transfer Protocol (SFTP) and BLOB (Binary Large Object) file system secure protocol drivers. With a powerful Python binding to the Apache Spark distributed computing framework, PySpark, these actions were coded into collaborative notebooks and chained into the aforementioned pipeline. The pipeline was successfully managed and configured throughout the lifetime of the project and is continuing to meet our needs in deployment. In this paper, we outline the general architecture and how it differs from previous smart building diagnostics initiatives, present details surrounding the underlying technology stack of our data pipeline, and enumerate some of the necessary configuration steps required to maintain and develop this big data analytics application in the cloud.
]]>Computers doi: 10.3390/computers13010022
Authors: Tamás Aladics Péter Hegedűs Rudolf Ferenc
With the evolution of software systems, their size and complexity are rising rapidly. Identifying vulnerabilities as early as possible is crucial for ensuring high software quality and security. Just-in-time (JIT) vulnerability prediction, which aims to find vulnerabilities at the time of commit, has increasingly become a focus of attention. In our work, we present a comparative study to provide insights into the current state of JIT vulnerability prediction by examining three candidate models: CC2Vec, DeepJIT, and Code Change Tree. These unique approaches aptly represent the various techniques used in the field, allowing us to offer a thorough description of the current limitations and strengths of JIT vulnerability prediction. Our focus was on the predictive power of the models, their usability in terms of false positive (FP) rates, and the granularity of the source code analysis they are capable of handling. For training and evaluation, we used two recently published datasets containing vulnerability-inducing commits: ProjectKB and Defectors. Our results highlight the trade-offs between predictive accuracy and operational flexibility and also provide guidance on the use of ML-based automation for developers, especially considering false positive rates in commit-based vulnerability prediction. These findings can serve as crucial insights for future research and practical applications in software security.
]]>Computers doi: 10.3390/computers13010021
Authors: Sabrine Belmekki Dominique Gruyer
In the dynamic landscape of vehicular communication systems, connected vehicles (CVs) present unprecedented capabilities in perception, cooperation, and, notably, collision probability management. This paper’s main concern is the estimation of collision probability. Effective collision estimation relies heavily on the sensor perception of obstacles and a critical collision probability prediction system. This paper is dedicated to refining the estimation of collision probability through the intentional integration of CV communications, with a specific focus on the collective perception of connected vehicles. The primary objective is to enhance the understanding of the probability of collisions in the surrounding environment by harnessing the collective insights gathered through inter-vehicular communication and collaboration. This improvement gives both the driving system and the human driver a superior anticipation capacity, thereby enhancing road safety. Furthermore, the incorporation of extended perception strategies holds the potential for more accurate collision probability estimation, providing the driving system or human driver with more time to react and make informed decisions, further fortifying road safety measures. The results underscore a significant enhancement in collision probability awareness, as connected vehicles collectively contribute to a more comprehensive collision probability landscape. Consequently, this heightened collective perception of collision probability improves the anticipation capacity of both the driving system and the human driver, contributing to an elevated level of road safety. For future work, the exploration of our extended perception techniques to achieve real-time collision probability estimation is proposed. Such endeavors aim to drive the development of robust and anticipatory autonomous driving systems that truly harness the benefits of connected vehicle technologies.
]]>Computers doi: 10.3390/computers13010020
Authors: Faraz Sasani Mohammad Moghareh Dehkordi Zahra Ebrahimi Hakimeh Dustmohammadloo Parisa Bouzari Pejman Ebrahimi Enikő Lencsés Mária Fekete-Farkas
Liquidity is the ease of converting an asset (physical/digital) into cash or another asset without loss, and it is shown by the relationship between the time scale and the price scale of an investment. This article examines the illiquidity of Bitcoin (BTC). Bitcoin hash rate information was collected at three different time intervals; in parallel, textual information related to these intervals was collected from Twitter for each day. Due to the regression nature of illiquidity prediction, approaches based on recurrent networks were suggested. Seven approaches (ANN, SVM, SANN, LSTM, Simple RNN, GRU, and IndRNN) were tested on these data. To evaluate these approaches, three evaluation methods were used: random split (paper), random split (run), and linear split (run). The research results indicate that the IndRNN approach provided the best results.
]]>Computers doi: 10.3390/computers13010019
Authors: Mónica Cruz Abílio Oliveira Alessandro Pinheiro
With the evolution of technologies, virtual reality allows us to dive into cyberspace through different devices and have immersive experiences in different contexts, which, in a simple way, we call virtual worlds or the multiverse (integrating Metaverse versions). Through virtual reality, it is possible to create infinite simulated environments to immerse ourselves in. The future internet may be slightly different from what we use today. Virtual immersion situations are common (particularly in gaming), and the Metaverse has become a lived and almost real experience claiming its presence in our daily lives. To investigate possible perspectives or concepts regarding the Metaverse, virtual reality, and immersion, we considered a main research question: To what extent can a film centered on the multiverse be associated with adults’ Metaverse perceptions? Considering that all participants are adults, the objectives of this study are to: (1) verify the representations of the Metaverse; (2) verify the representations of immersion; (3) verify the representations of the multiverse; (4) verify the importance of a film (related to the Metaverse and the multiverse) on the representations found. This study, framed in a Ph.D. research project, analyzed the participants’ answers through an online survey using two films to gather thoughts, ideas, emotions, sentiments, and reactions according to our research objectives. Some limitations were considered, such as the number of participants, the number of questionnaire questions, and the participants’ knowledge (or lack thereof) of the main concepts. Our results showed that a virtual world created by a movie might stimulate the perception of almost living in that supposed reality, accepting the multiverse and Metaverse not as distant concepts but as close experiences, even in an unconscious form. This finding is also a positive contribution to an ongoing discussion aiming for an essential understanding of the Metaverse as a complex concept.
]]>Computers doi: 10.3390/computers13010018
Authors: Raja Rao Budaraju Sastry Kodanda Rama Jammalamadaka
Many data mining studies have focused on mining positive associations among frequent and regular item sets. However, none have considered time together with regularity when mining such associations. The frequent and regular item sets will be huge when regularity and frequency are considered without any time constraint. Negative associations are equally important in medical databases, reflecting considerable discrepancies in the medications used to treat various disorders. It is important to find the most effective negative associations, and the mined associations should be as small as possible so that the most important disconnections can be found. This paper proposes a method that mines medical databases to find regular, frequent, closed, and maximal item sets that reflect minimal negative associations. The proposed algorithm reduces the negative associations by 70% when the maximal and closed properties are used, for any sample size, regularity threshold, or frequency threshold.
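The effect of the maximality filter can be sketched with brute-force itemset mining (illustrative only; the paper’s algorithm additionally handles regularity, closedness, and negative associations, and the transactions below are invented): keeping only maximal frequent sets shrinks the output while preserving the strongest patterns.

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    # Brute-force enumeration: fine for a toy example, not for real data.
    items = sorted({i for t in transactions for i in t})
    freq = []
    for k in range(1, len(items) + 1):
        for cand in combinations(items, k):
            support = sum(1 for t in transactions if set(cand) <= t)
            if support >= min_support:
                freq.append(frozenset(cand))
    return freq

def maximal(freq):
    # Keep only sets with no frequent proper superset.
    return [s for s in freq if not any(s < t for t in freq)]

tx = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"a", "b", "c"}]
freq = frequent_itemsets(tx, min_support=2)
print(len(freq), len(maximal(freq)))   # 7 frequent sets, 1 maximal set
```

Even on this tiny database, maximality collapses seven frequent sets into one, which is the kind of reduction the abstract reports at scale.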
]]>Computers doi: 10.3390/computers13010017
Authors: Dimitris Zeginis Konstantinos Tarabanis
In a continuously evolving environment, organizations, including public administrations, need to quickly adapt to change and make decisions in real-time. This requires having a real-time understanding of their context that can be achieved by adopting an event-native mindset in data management which focuses on the dynamics of change compared to the state-based traditional approaches. In this context, this paper proposes the adoption of an event-centric knowledge graph approach for the holistic data management of all data repositories in public administration. Towards this direction, the paper proposes an event-centric knowledge graph model for the domain of public administration that captures these dynamics considering events as first-class entities for knowledge representation. The development of the model is based on a state-of-the-art analysis of existing event-centric knowledge graph models that led to the identification of core concepts related to event representation, on a state-of-the-art analysis of existing public administration models that identified the core entities of the domain, and on a theoretical analysis of concepts related to events, public services, and effective public administration in order to outline the context and identify the domain-specific needs for event modeling. Further, the paper applies the model in the context of Greek public administration in order to validate it and showcase the possibilities that arise. The results show that the adoption of event-centric knowledge graph approaches for data management in public administration can facilitate data analytics, continuous integration, and the provision of a 360-degree view of end-users. We anticipate that the proposed approach will also facilitate real-time decision-making, continuous intelligence, and ubiquitous AI.
]]>Computers doi: 10.3390/computers13010016
Authors: Pedro Pablo Garrido Abenza Manuel P. Malumbres Pablo Piñol Otoniel López-Granado
When working with the Wireless Access in Vehicular Environment (WAVE) protocol stack, the multi-channel operation mechanism of the IEEE 1609.4 protocol may impact the overall network performance, especially when using video streaming applications. In general, packets delivered from the application layer during a Control Channel (CCH) time slot have to wait for transmission until the next Service Channel (SCH) time slot arrives. The accumulation of packets at the beginning of the latter time slot may introduce additional delays and higher contention when all the network nodes try, at the same time, to obtain access to the shared channel in order to send the delayed packets as soon as possible. In this work, we have analyzed these performance issues and proposed a new method, which we call SkipCCH, that helps the MAC layer to overcome the high contention produced by the packet transmission bursts at the beginning of every SCH slot. This high contention implies an increase in the number of packet losses, which directly impacts the overall network performance. With our proposal, streaming video in vehicular networks will provide a better quality of reconstructed video at the receiver side under the same network conditions. Furthermore, this method has particularly proven its benefits when working with Quality of Service (QoS) techniques, not only by increasing the received video quality but also because it avoids starvation of the lower-priority traffic.
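The queuing effect described above can be reproduced with a toy simulation (all parameters are our own illustrative choices, not taken from the paper or the standard beyond the 50 ms slot alternation): packets generated during each CCH slot cannot be transmitted and accumulate, so every SCH slot opens with a burst of backlogged packets.

```python
# Toy IEEE 1609.4 multi-channel queuing simulation.

SLOT_MS = 50        # 1609.4 alternates 50 ms CCH / 50 ms SCH slots
GEN_EVERY_MS = 5    # the application generates one packet every 5 ms

def simulate(sync_intervals=3):
    queue, bursts = 0, []
    for _ in range(sync_intervals):
        for t in range(SLOT_MS):            # CCH slot: queue only
            if t % GEN_EVERY_MS == 0:
                queue += 1
        bursts.append(queue)                # backlog when the SCH slot opens
        for t in range(SLOT_MS):            # SCH slot: drain 1 packet/ms
            if t % GEN_EVERY_MS == 0:
                queue += 1
            if queue:
                queue -= 1
    return bursts

print(simulate())   # [10, 10, 10]: a 10-packet burst at every SCH boundary
```

With many nodes releasing such bursts simultaneously, contention spikes at each SCH boundary; SkipCCH targets exactly this synchronized burst rather than the average load.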
]]>Computers doi: 10.3390/computers13010015
Authors: Dillip Ranjan Nayak Neelamadhab Padhy Pradeep Kumar Mallick Dilip Kumar Bagal Sachin Kumar
Figure 1 was reproduced without the correct copyright permissions from the copyright holder (Medical Sciences) [...]
]]>Computers doi: 10.3390/computers13010014
Authors: Dimitrios Stamatakis Dimitrios G. Kogias Pericles Papadopoulos Panagiotis A. Karkazis Helen C. Leligou
The advancement and acceptance of new technologies often hinge on the level of understanding and trust among potential users. Blockchain technology, despite its broad applications across diverse sectors, is often met with skepticism due to a general lack of understanding and incidents of illicit activities in the cryptocurrency domain. This study aims to demystify blockchain technology by providing an in-depth examination of its application in a novel blockchain-based card game, centered around renewable energy and sustainable resource management. This paper introduces a serious game that uses blockchain to enhance user interaction, ownership, and gameplay, demonstrating the technology’s potential to revolutionize the gaming industry. Notable aspects of the game, such as ownership of virtual assets, transparent transaction histories, trustless game mechanics, user-driven content creation, gasless transactions, and mechanisms for in-game asset trading and cross-platform asset reuse are analyzed. The paper discusses how these features not only provide a richer gaming experience but also serve as effective tools for raising awareness about sustainable energy and resource management, thereby bridging the gap between entertainment and education. The case study offers valuable insights into how blockchain can create dynamic, secure, and participatory virtual environments, shifting the paradigm of traditional online gaming.
]]>Computers doi: 10.3390/computers13010013
Authors: Luis Manuel Pereira Addisson Salazar Luis Vergara
Automatic data fusion is an important field of machine learning that has been increasingly studied. The objective is to improve the classification performance from several individual classifiers in terms of accuracy and stability of the results. This paper presents a comparative study on recent data fusion methods. The fusion step can be applied at early and/or late stages of the classification procedure. Early fusion consists of combining features from different sources or domains to form the observation vector before the training of the individual classifiers. By contrast, late fusion consists of combining the results from the individual classifiers after the testing stage. Late fusion has two setups: combination of the posterior probabilities (scores), which is called soft fusion, and combination of the decisions, which is called hard fusion. A theoretical analysis of the conditions for applying the three kinds of fusion (early, late soft, and late hard) is introduced. Thus, we propose a comparative analysis with different schemes of fusion, including weaknesses and strengths of the state-of-the-art methods studied from the following perspectives: sensors, features, scores, and decisions.
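The soft/hard late-fusion distinction described above can be sketched in a few lines. The classifier scores, class counts, and sample values below are invented for illustration and are not taken from the paper:

```python
import numpy as np

# Hypothetical posterior scores from three individual classifiers
# for four test samples over two classes (rows: samples).
scores = [
    np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8], [0.7, 0.3]]),
    np.array([[0.8, 0.2], [0.3, 0.7], [0.4, 0.6], [0.6, 0.4]]),
    np.array([[0.7, 0.3], [0.6, 0.4], [0.1, 0.9], [0.8, 0.2]]),
]

# Soft fusion: average the posterior probabilities, then decide.
soft = np.mean(scores, axis=0)
soft_decisions = soft.argmax(axis=1)

# Hard fusion: each classifier decides first, then majority vote.
votes = np.stack([s.argmax(axis=1) for s in scores])
hard_decisions = np.array([np.bincount(v).argmax() for v in votes.T])

print(soft_decisions.tolist())  # → [0, 1, 1, 0]
print(hard_decisions.tolist())  # → [0, 1, 1, 0]
```

Here both setups agree, but they can diverge when a single confident classifier outweighs two weakly opposed ones: soft fusion keeps that confidence information, while hard fusion discards it at the vote.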
]]>Computers doi: 10.3390/computers13010012
Authors: Ju Zhang Bin Chen Jiahui Qiu Lingfan Zhuang Zhiyuan Wang Liu Liu
In recent years, Long-Term Evolution Vehicle-to-Everything (LTE-V2X) communication technology has received extensive attention. Timing synchronization is a crucial step in the receiving process, addressing Timing Offsets (TOs) resulting from random propagation delays, sampling frequency mismatches between the transmitter and receiver or a combination of both. However, the presence of high-speed relative movement between nodes and a low antenna height leads to a significant Doppler frequency offset, resulting in a low Signal-to-Noise Ratio (SNR) for received signals in LTE-V2X communication scenarios. This paper aims to investigate LTE-V2X technology with a specific focus on time synchronization. The research centers on the time synchronization method utilizing the Primary Sidelink Synchronization Signal (PSSS) and conducts a comprehensive analysis of existing algorithms, highlighting their respective advantages and disadvantages. On this basis, a robust timing synchronization algorithm for LTE-V2X communication scenarios is proposed. The algorithm comprises three key steps: coarse synchronization, frequency offset estimation and fine synchronization. Enhanced robustness is achieved through algorithm fusion, optimal decision threshold design and predefined frequency offset values. Furthermore, a hardware-in-the-loop simulation platform is established. The simulation results demonstrate a substantial performance improvement for the proposed algorithm compared to existing methods under adverse channel conditions characterized by high frequency offsets and low SNR.
]]>Computers doi: 10.3390/computers13010011
Authors: Lahlou Imane Motaki Noureddine Sarsri Driss L’yarfi Hanane
In the face of numerous challenges in supply chain management, new technologies are being implemented to overcome obstacles and improve overall performance. Among these technologies, blockchain, a part of the distributed ledger family, offers several advantages when integrated with ERP systems, such as transparency, traceability, and data security. However, blockchain remains a novel, complex, and costly technology. The purpose of this paper is to guide decision-makers in determining whether integrating blockchain technology with ERP systems is appropriate during the pre-implementation phase. This paper draws on literature reviews, theories, and expert opinions to achieve its objectives. It first provides an overview of blockchain technology, then discusses its potential benefits to the supply chain, and finally proposes a framework to assist decision-makers in determining whether blockchain meets the needs of their consortium and whether this integration aligns with available resources. The results highlight the complexity of blockchain, the importance of detailed and in-depth research in deciding whether to integrate blockchain technology into ERP systems, and future research prospects. The findings of this article also present the critical decisions to be made prior to the implementation of blockchain, in the event that decision-makers choose to proceed with blockchain integration. The findings of this article augment the existing literature and can be applied in real-world contexts by stakeholders involved in blockchain integration projects with ERP systems.
]]>Computers doi: 10.3390/computers13010010
Authors: Jhon Fernando Sánchez-Álvarez Gloria Patricia Jaramillo-Álvarez Jovani Alberto Jiménez-Builes
Augmentative and alternative communication techniques (AAC) are essential to assist individuals facing communication difficulties. (1) Background: It is acknowledged that dynamic solutions that adjust to the changing needs of patients are necessary in the context of neuromuscular diseases. (2) Methods: In order to address this concern, a differential approach was suggested that entailed the prior identification of the disease state. This approach employs fuzzy logic to ascertain the disease stage by analyzing intuitive patterns; it is contrasted with two intelligent systems. (3) Results: The results indicate that the AAC system’s adaptability enhances with the progression of the disease’s phases, thereby ensuring its utility throughout the lifespan of the individual. Although the adaptive AAC system exhibits signs of improvement, an expanded assessment involving a greater number of patients is required. (4) Conclusions: Qualitative assessments of comparative studies shed light on the difficulties associated with enhancing accuracy and adaptability. This research highlights the significance of investigating the use of fuzzy logic or artificial intelligence methods in order to solve the issue of symptom variability in disease staging.
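As a toy illustration of how fuzzy logic can map a symptom score to a disease stage, consider triangular membership functions; the stage names, score ranges, and thresholds below are hypothetical, and the paper's system works from richer intuitive patterns than a single score:

```python
def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def stage(score):
    """Pick the stage whose fuzzy membership is highest for this score."""
    memberships = {
        "early": triangular(score, -1, 1, 4),
        "intermediate": triangular(score, 2, 5, 8),
        "advanced": triangular(score, 6, 9, 12),
    }
    return max(memberships, key=memberships.get)

print(stage(1))  # → early
print(stage(5))  # → intermediate
print(stage(9))  # → advanced
```

The overlapping ranges are the point: near a boundary (e.g. a score of 3.5), two stages have nonzero membership, which lets the AAC layer degrade gracefully rather than switch abruptly.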
]]>Computers doi: 10.3390/computers13010009
Authors: Lucas Daudt Franck Gabriel Augusto Ginja João Paulo Carmo José A. Afonso Maximiliam Luppe
The growth of digital communications has driven the development of numerous cryptographic methods for secure data transfer and storage. The SHA-256 algorithm is a cryptographic hash function widely used for validating data authenticity, identity, and integrity. The inherent SHA-256 computational overhead has motivated the search for more efficient hardware solutions, such as application-specific integrated circuits (ASICs). This work presents a custom ASIC hardware accelerator for the SHA-256 algorithm entirely created using open-source electronic design automation tools. The integrated circuit was synthesized using SkyWater SKY130 130 nm process technology through the OpenLANE automated workflow. The proposed final design is compatible with 32-bit microcontrollers, has a total area of 104,585 µm², and operates at a maximum clock frequency of 97.9 MHz. Several optimization configurations were tested and analyzed during the synthesis phase to enhance the performance of the final design.
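For reference, the function the ASIC accelerates behaves like the software SHA-256 in Python's `hashlib`; this sketch only illustrates the hash-based integrity check itself, not the hardware design, and the message is invented:

```python
import hashlib

# SHA-256 maps an arbitrary message to a fixed 256-bit digest.
message = b"secure data transfer"
digest = hashlib.sha256(message).hexdigest()

print(len(digest))  # 64 hex characters = 256 bits

# The same input always yields the same digest (deterministic)...
print(digest == hashlib.sha256(b"secure data transfer").hexdigest())

# ...while any tampering changes the digest, which is what makes
# the hash usable for integrity validation.
print(digest != hashlib.sha256(b"secure data transfer!").hexdigest())
```

The hardware accelerator computes exactly this transformation, but offloads the 64-round compression loop that dominates software run time.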
]]>Computers doi: 10.3390/computers13010008
Authors: Simon Lohmann Dietmar Tutsch
We present a hardware data structure specifically designed for FPGAs that enables the execution of hard real-time database CRUD operations using a hybrid data structure that combines trees and rings. While the number of rows and columns has to be limited for hard real-time execution, the actual content can be of any size. Our structure restricts full navigational freedom to every layer but the leaf layer, thus keeping the memory overhead for the data stored in the leaves low. Although its nodes differ in function, all have exactly the same size and structure, reducing the number of cascaded decisions required in the database operations. This enables fast and efficient hardware implementation on FPGAs. In addition to the usual comparison with known data structures, we also analyze the tradeoff between the memory consumption of our approach and a simplified version that is doubly linked in all layers.
]]>Computers doi: 10.3390/computers13010007
Authors: Yuan Zhang Meysam Effati Aaron Hao Tan Goldie Nejat
Wearing masks in indoor and outdoor public places has been mandatory in a number of countries during the COVID-19 pandemic. Correctly wearing a face mask can reduce the transmission of the virus through respiratory droplets. In this paper, a novel two-step deep learning (DL) method based on our extended ResNet-50 is presented. It can detect and classify whether face masks are missing, are worn correctly or incorrectly, or the face is covered by other means (e.g., a hand or hair). Our DL method utilizes transfer learning with pretrained ResNet-50 weights to reduce training time and increase detection accuracy. Training and validation are achieved using the MaskedFace-Net, MAsked FAces (MAFA), and CelebA datasets. The trained model has been incorporated into a socially assistive robot for robust and autonomous detection using lower-resolution images from the robot’s onboard camera. The results show a classification accuracy of 84.13% for the classification of no mask, correctly masked, and incorrectly masked faces in various real-world poses and occlusion scenarios using the robot.
]]>Computers doi: 10.3390/computers13010006
Authors: Josiah E. Balota Ah-Lian Kor Olatunji A. Shobande
The domain of Multi-Network Latency Prediction for IoT and Wireless Sensor Networks (WSNs) confronts significant challenges. However, continuous research efforts and progress in areas such as machine learning, edge computing, security technologies, and hybrid modelling are actively influencing the closure of identified gaps. Effectively addressing the inherent complexities in this field will play a crucial role in unlocking the full potential of latency prediction systems within the dynamic and diverse landscape of the Internet of Things (IoT). Using linear interpolation and extrapolation algorithms, the study explores the use of multi-network real-time end-to-end latency data for precise prediction. This approach has significantly improved network performance through throughput and response time optimization. The findings indicate high prediction accuracy, with the majority of experimental connection pairs achieving over 95% accuracy and the remainder falling within a 70% to 95% range. This research provides tangible evidence that data packet and end-to-end latency time predictions for heterogeneous low-rate and low-power WSNs, facilitated by a localized database, can substantially enhance network performance and minimize latency. Our proposed JosNet model simplifies and streamlines WSN prediction by employing linear interpolation and extrapolation techniques. The research findings also underscore the potential of this approach to revolutionize the management and control of data packets in WSNs, paving the way for more efficient and responsive wireless sensor networks.
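A minimal sketch of latency prediction by linear interpolation and extrapolation, the core technique named above. The timestamps, latency values, and function name are invented for illustration and are not the JosNet API:

```python
import numpy as np

# Hypothetical end-to-end latency samples (ms) observed at times t (s)
# for one connection pair.
t = np.array([0.0, 1.0, 2.0, 3.0])
latency = np.array([10.0, 12.0, 14.0, 16.0])

def predict_latency(query_t):
    """Linear interpolation inside the observed window,
    linear extrapolation from the last two samples beyond it."""
    if query_t <= t[-1]:
        return float(np.interp(query_t, t, latency))
    slope = (latency[-1] - latency[-2]) / (t[-1] - t[-2])
    return float(latency[-1] + slope * (query_t - t[-1]))

print(predict_latency(1.5))  # interpolated → 13.0 ms
print(predict_latency(4.0))  # extrapolated → 18.0 ms
```

In the paper's setting, such predictions drawn from a localized database of past measurements let nodes anticipate end-to-end delays without continuous probing.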
]]>Computers doi: 10.3390/computers13010005
Authors: Alessio Faccia Julie McDonald Babu George
Transparency in financial reporting is crucial for maintaining trust in financial markets, yet fraudulent financial statements remain challenging to detect and prevent. This study introduces a novel approach to detecting financial statement fraud by applying sentiment analysis to analyse the textual data within financial reports. This research aims to identify patterns and anomalies that might indicate fraudulent activities by examining the language and sentiment expressed across multiple fiscal years. The study focuses on three companies known for financial statement fraud: Wirecard, Tesco, and Under Armour. Utilising Natural Language Processing (NLP) techniques, the research analyses polarity (positive or negative sentiment) and subjectivity (degree of personal opinion) within the financial statements, revealing intriguing patterns. Wirecard showed a consistent tone with a slight decrease in 2018, Tesco exhibited marked changes in the fraud year, and Under Armour presented subtler shifts during the fraud years. While the findings present promising trends, the study emphasises that sentiment analysis alone cannot definitively detect financial statement fraud. It provides insights into the tone and mood of the text but cannot reveal intentional deception or financial discrepancies. The results serve as supplementary information, enriching traditional financial analysis methods. This research contributes to the field by exploring the potential of sentiment analysis in financial fraud detection, offering a unique perspective that complements quantitative methods. It opens new avenues for investigation and underscores the need for an integrated, multidimensional approach to fraud detection.
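A toy lexicon-based polarity score illustrating the sentiment-analysis idea above; the study uses full NLP tooling, and the word lists and example sentences here are invented, not drawn from the Wirecard, Tesco, or Under Armour filings:

```python
# Tiny illustrative sentiment lexicon: +1 for positive terms, -1 for negative.
POLARITY = {"growth": 1, "strong": 1, "record": 1,
            "loss": -1, "decline": -1, "uncertain": -1}

def polarity(text):
    """Average lexicon score over matched words; 0.0 if no word matches."""
    words = text.lower().split()
    hits = [POLARITY[w] for w in words if w in POLARITY]
    return sum(hits) / len(hits) if hits else 0.0

report_year_a = "strong growth and record revenue"
report_year_b = "decline in revenue amid uncertain outlook"

print(polarity(report_year_a))  # positive tone
print(polarity(report_year_b))  # negative tone
```

Tracking such a score across fiscal years is what surfaces the tonal shifts the paper reports, though, as the authors stress, a shift in tone is supplementary evidence, not proof of fraud.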
]]>Computers doi: 10.3390/computers13010004
Authors: Majed Imad Antoine Grenier Xiaolong Zhang Jari Nurmi Elena Simona Lohan
Low Earth Orbit (LEO) constellations have recently gained tremendous attention in the navigational field due to their larger constellation size, faster geometry variations, and higher signal power levels than Global Navigation Satellite Systems (GNSS), making them favourable for Position, Navigation, and Timing (PNT) purposes. Satellite signals are heavily attenuated by the atmospheric layers, especially the ionosphere. Ionospheric delays are, however, expected to be smaller in signals from LEO satellites than GNSS due to their lower orbital altitudes and higher carrier frequency. Nevertheless, unlike for GNSS, there are currently no standardized models for correcting the ionospheric errors in LEO signals. In this paper, we derive a new model called Interpolated and Averaged Memory Model (IAMM) starting from existing International GNSS Service (IGS) data and based on the observation that ionospheric effects repeat every 11 years. Our IAMM model can be used for ionospheric corrections for signals from any satellite constellation, including LEO. This model is constructed by averaging multiple ionospheric data sets reflecting the electron content inside the ionosphere. The IAMM model’s primary advantage is its ability to be used both online and offline without needing real-time input parameters, thus making it easy to store in a device’s memory. We compare this model with two benchmark models, the Klobuchar and International Reference Ionosphere (IRI) models, by utilizing GNSS measurement data from 24 scenarios acquired in several European countries using both professional GNSS receivers and Android smartphones. The model’s behaviour is also evaluated on LEO signals using simulated data (as measurement data based on LEO signals are still not available in the open-access community); we show a significant reduction in ionospheric delays in LEO signals compared to GNSS.
Finally, we highlight the remaining open challenges toward viable ionospheric-delay models in an LEO-PNT context.
]]>Computers doi: 10.3390/computers13010003
Authors: Alexey Nosov Yulia Kuznetsova Maksim Stankevich Ivan Smirnov Oleg Grigoriev
Social media has become an almost unlimited resource for studying social processes. Seasonality is a phenomenon that significantly affects many physical and mental states. Modeling collective emotional seasonal changes is a challenging task for the technical, social, and humanities sciences. This is due to the laboriousness and complexity of obtaining a sufficient amount of data, processing and evaluating them, and presenting the results. At the same time, understanding the annual dynamics of collective sentiment provides us with important insights into collective behavior, especially in various crises or disasters. In our study, we propose a scheme for identifying and evaluating signs of the seasonal rise and fall of emotional tension based on social media texts. The analysis is based on Russian-language comments in VKontakte social network communities devoted to city news and the events of a small town in the Nizhny Novgorod region, Russia. Workflow steps include a statistical method for categorizing data, exploratory analysis to identify common patterns, data aggregation for modeling seasonal changes, the identification of typical data properties through clustering, and the formulation and validation of seasonality criteria. As a result of seasonality modeling, it is shown that the calendar seasonal model corresponds to the data, and the dynamics of emotional tension correlate with the seasons. The proposed methodology is useful for a wide range of social practice issues, such as monitoring public opinion or assessing irregular shifts in mass emotions.
]]>Computers doi: 10.3390/computers13010002
Authors: Christos Stavrogiannis Filippos Sofos Maria Sagri Denis Vavougios Theodoros E. Karakasidis
Data science and machine learning (ML) techniques are employed to shed light into the molecular mechanisms that affect fluid-transport properties at the nanoscale. Viscosity and thermal conductivity values of four basic monoatomic elements, namely, argon, krypton, nitrogen, and oxygen, are gathered from experimental and simulation data in the literature and constitute a primary database for further investigation. The data refers to a wide pressure–temperature (P-T) phase space, covering fluid states from gas to liquid and supercritical. The database is enriched with new simulation data extracted from our equilibrium molecular dynamics (MD) simulations. A machine learning (ML) framework with ensemble, classical, kernel-based, and stacked algorithmic techniques is also constructed to function in parallel with the MD model, trained by existing data and predicting the values of new phase space points. In terms of algorithmic performance, it is shown that the stacked and tree-based ML models have given the most accurate results for all elements and can be excellent choices for small to medium-sized datasets. In such a way, a twofold computational scheme is constructed, functioning as a computationally inexpensive route that achieves high accuracy, aiming to replace costly experiments and simulations, when feasible.
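A minimal stacked-ensemble sketch in scikit-learn mirroring the framework described above, trained on a synthetic viscosity-like target over a (P, T) grid; the target formula, value ranges, and base-learner choice are invented for illustration, while the real study trains on experimental and MD data with a broader set of ensemble, classical, and kernel-based learners:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Hypothetical (pressure, temperature) phase-space points and a
# synthetic "viscosity" target with small noise.
X = rng.uniform([1.0, 100.0], [100.0, 400.0], size=(200, 2))
y = 0.05 * X[:, 0] - 0.01 * X[:, 1] + rng.normal(0, 0.1, 200)

# Stacking: base learners' cross-validated predictions feed a meta-model.
stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=50, random_state=0)),
                ("svr", SVR())],
    final_estimator=Ridge(),
)
stack.fit(X, y)
print(round(stack.score(X, y), 2))  # training R^2
```

Once trained on the existing database, such a model can predict transport properties at new (P, T) points far more cheaply than running a fresh MD simulation, which is the twofold scheme the paper describes.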
]]>Computers doi: 10.3390/computers13010001
Authors: Konstantinos Zioutos Haridimos Kondylakis Kostas Stefanidis
Nowadays, in the pursuit of personalized health and well-being, dietary choices are critical. This paper introduces a novel recommendation system designed to provide users with personalized meal plans, consisting of breakfast, lunch, snack, and dinner, in alignment with their health history and preferences from other similar users. More specifically, our system exploits collaborative filtering first to identify other users with similar dietary preferences and uses this information to propose suitable recipes to individuals. The whole process is enhanced by analyzing the individual’s health history, including dietary restrictions, nutritional needs, and specific diet plans, such as low-carb or vegetarian. This ensures that the generated meal plans are not only aligned with the user’s taste but also contribute to the overall wellness of the user. A distinctive feature of our system is its dynamic adaptation feature, which enables users to make real-time adjustments to their meal plans based on their personal constraints and preferences, directly impacting future recommendations. We evaluate the usability of the system through a series of experiments on a large real-world data set of recipes, showing that our system is able to provide highly personalized, dynamic, and accurate recommendations.
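The collaborative-filtering step can be sketched as a nearest-neighbour lookup over a user-recipe rating matrix; the matrix, similarity measure, and recipe indices below are invented for illustration and omit the health-history filtering the full system applies:

```python
import numpy as np

# Hypothetical user x recipe rating matrix (0 = unrated).
ratings = np.array([
    [5, 4, 0, 1],   # target user
    [5, 5, 4, 0],   # user with similar tastes
    [1, 0, 5, 5],   # user with dissimilar tastes
], dtype=float)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

target = ratings[0]
sims = [cosine(target, ratings[i]) for i in (1, 2)]

# Recommend an unrated recipe highly rated by the most similar user.
best_neighbour = ratings[1 + int(np.argmax(sims))]
recommended = int(np.argmax(np.where(target == 0, best_neighbour, -1)))
print(recommended)  # index of the recommended recipe → 2
```

In the full system, candidate recipes surfaced this way are then screened against the user's dietary restrictions and plan (e.g. low-carb or vegetarian) before entering a meal plan, and real-time adjustments re-rank future candidates.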
]]>Computers doi: 10.3390/computers12120263
Authors: Nelson Cárdenas-Bolaño Aura Polo Carlos Robles-Algarín
This paper presents the implementation of an intelligent real-time single-channel electromyography (EMG) signal classifier based on open-source hardware. The article shows the experimental design, analysis, and implementation of a solution to identify four muscle movements from the forearm (extension, pronation, supination, and flexion), for future applications in transradial active prostheses. An EMG signal acquisition instrument was developed, with a 20–450 Hz bandwidth and 2 kHz sampling rate. The signals were stored in a database, as a multidimensional array, using a desktop application. Numerical and graphic analysis approaches for discriminative capacity were proposed for feature analysis, and four feature sets were used to feed the classifier. Artificial Neural Networks (ANN) were implemented for time-domain EMG pattern recognition (PR). The system obtained a classification accuracy of 98.44% and response times per signal of 8.522 ms. The results suggest that these methods allow an intuitive understanding of the behavior of the user information.
]]>Computers doi: 10.3390/computers12120262
Authors: Abdullah Ali Jawad Al-Abadi Mbarka Belhaj Mohamed Ahmed Fakhfakh
In recent years, the combination of wireless body sensor networks (WBSNs) and the Internet of Medical Things (IoMT) marked a transformative era in healthcare technology. This combination allowed for the smooth communication between medical devices that enabled the real-time monitoring of patients’ vital signs and health parameters. However, the increased connectivity also introduced security challenges, particularly in relation to the presence of attack nodes. This paper proposed a unique solution, an enhanced random forest classifier with a K-means clustering (ERF-KMC) algorithm, in response to these challenges. The proposed ERF-KMC algorithm combined the accuracy of the enhanced random forest classifier for achieving the best execution time (ERF-ABE) with the clustering capabilities of K-means. This model played a dual role. Initially, the security in IoMT networks was enhanced through the detection of attack messages using ERF-ABE, followed by the classification of attack types, specifically distinguishing between man-in-the-middle (MITM) and distributed denial of service (DDoS) attacks using K-means. This approach facilitated the precise categorization of attacks, enabling the ERF-KMC algorithm to employ appropriate methods for blocking these attack messages effectively. Subsequently, this approach contributed to the improvement of network performance metrics that significantly deteriorated during the attack, including the packet loss rate (PLR), end-to-end delay (E2ED), and throughput. This was achieved through the detection of attack nodes and the subsequent prevention of their entry into the IoMT networks, thereby mitigating potential disruptions and enhancing the overall network efficiency. This study conducted simulations using the Python programming language to assess the performance of the ERF-KMC algorithm in the realm of IoMT, specifically focusing on network performance metrics.
In comparison with other algorithms, the ERF-KMC algorithm demonstrated superior efficacy, showcasing its heightened capability in terms of optimizing IoMT network performance as compared to other common algorithms in network security, such as AdaBoost, CatBoost, and random forest. The importance of the ERF-KMC algorithm lies in its security for IoMT networks, as it provides a high-security approach for identifying and preventing MITM and DDoS attacks. Furthermore, improving the network performance metrics to ensure transmitted medical data are accurate and efficient is vital for real-time patient monitoring. This study takes the next step towards enhancing the reliability and security of IoMT systems and advancing the future of connected healthcare technologies.
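A two-stage sketch of the detection-then-clustering idea behind ERF-KMC, using a plain scikit-learn random forest and K-means on synthetic traffic features; the feature values and cluster placement are invented, and this is a generic stand-in rather than the authors' enhanced ERF-ABE classifier:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical 2-D traffic features: benign traffic near 0,
# two attack types near 5 and 10.
benign = rng.normal(0, 1, size=(50, 2))
mitm = rng.normal(5, 1, size=(25, 2))
ddos = rng.normal(10, 1, size=(25, 2))

X = np.vstack([benign, mitm, ddos])
y = np.array([0] * 50 + [1] * 50)  # 0 = benign, 1 = attack

# Stage 1: detect attack messages with a random forest.
detector = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
attacks = X[detector.predict(X) == 1]

# Stage 2: separate the detected attacks into two types
# (e.g. MITM vs. DDoS) with K-means clustering.
types = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(attacks)
print(len(attacks), sorted(set(types.tolist())))
```

The split of duties mirrors the paper: the supervised stage answers "is this message an attack?", and the unsupervised stage answers "which kind?", so the appropriate blocking method can be applied per cluster.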
]]>Computers doi: 10.3390/computers12120261
Authors: Vangelis Sarlis George Papageorgiou Christos Tjortjis
Injuries are an unfortunate part of professional sports. This study aims to explore the multi-dimensional impact of injuries in professional basketball, focusing on player performance, team dynamics, and economic outcomes. Employing advanced machine learning and text mining techniques on suitably preprocessed NBA data, we examined the intricate interplay between injury and performance metrics. Our findings reveal that specific anatomical sub-areas, notably knees, ankles, and thighs, are crucial for athletic performance and injury prevention. The analysis revealed the significant economic burden that certain injuries impose on teams, necessitating comprehensive long-term strategies for injury management. The results provide valuable insights into the distribution of injuries and their varied effects, which are essential for developing effective prevention and economic strategies in basketball. By illuminating how injuries influence performance and recovery dynamics, this research offers comprehensive insights that are beneficial for NBA teams, healthcare professionals, medical staff, and trainers, paving the way for enhanced player care and optimized performance strategies.
]]>Computers doi: 10.3390/computers12120260
Authors: Arnab Biswas Kiki Adhinugraha David Taniar
With urban areas facing rapid population growth, public transport plays a key role in providing efficient and economical accessibility to residents. It reduces the use of personal vehicles, leading to reduced traffic congestion on roads and reduced pollution. To assess the performance of these transport systems, prior studies have taken into consideration blank spot areas, population density, and stop access density; however, very little research has compared accessibility between cities using a GIS-based approach. This paper compares the access and performance of public transport across Melbourne and Sydney, two cities with a similar size, population, and economy. The methodology uses spatial PostGIS queries to apply an accessibility-based approach to each residential mesh block and aggregates the blank spots, the number of services offered by time of day, and the frequency of services at the local government area (LGA) level. The results of the study reveal an interesting trend: as the distance of an LGA from the city centre increases, the blank spot percentage increases, while the frequency of services and the share of stops offering weekend/night services decline. The results conclude that while Sydney exhibits a lower percentage of blank spots and better coverage, performance in terms of accessibility by service time and frequency is better for Melbourne’s LGAs, even as the distance from the city centre increases.
]]>Computers doi: 10.3390/computers12120259
Authors: Yongseok Lee Jonghee Youn Kevin Nam Hyunyoung Oh Yunheung Paek
This paper focuses on enhancing the performance of the Nth-degree truncated-polynomial ring units key encapsulation mechanism (NTRU-KEM) algorithm, which ensures post-quantum resistance in the field of key establishment cryptography. The NTRU-KEM, while robust, suffers from increased storage and computational demands compared to classical cryptography, leading to significant memory and performance overheads. In environments with limited resources, the negative impacts of these overheads are more noticeable, leading researchers to investigate ways to speed up processes while also ensuring they are efficient in terms of area utilization. To address this, our research carefully examines the detailed functions of the NTRU-KEM algorithm, adopting a software/hardware co-design approach. This approach allows for customized computation, adapting to the varying requirements of operational timings and iterations. The key contribution is the development of a novel hardware acceleration technique focused on optimizing bus utilization. This technique enables parallel processing of multiple sub-functions, enhancing the overall efficiency of the system. Furthermore, we introduce a unique integrated register array that significantly reduces the spatial footprint of the design by merging multiple registers within the accelerator. In our experiments, the proposed design achieved a time-area efficiency that surpasses previous work by an average factor of 25.37. This achievement underscores the effectiveness of our optimization in accelerating the NTRU-KEM algorithm.
]]>Computers doi: 10.3390/computers12120258
Authors: Sunghae Jun
In big data analysis, various zero-inflated problems are occurring. In particular, the problem of inflated zeros has a great influence on text big data analysis. In general, the preprocessed data from text documents form a matrix whose rows and columns correspond to documents and terms, respectively. Each element of this matrix is the frequency with which a term occurs in a document. Most elements of the matrix are zeros, because the number of columns is much larger than the number of rows. This problem degrades model performance in text data analysis. To overcome this problem, we propose a method of zero-inflated text data analysis using generative adversarial networks (GAN) and statistical modeling. In this paper, we solve the zero-inflated problem using synthetic data generated from the original data with zero inflation. The main finding of our study is how to change zero values to very small numeric values with random noise through the GAN. The generator and discriminator of the GAN learned the zero-inflated text data together and built a model that generates synthetic data that can replace the zero-inflated data. We conducted experiments, using real and simulation data sets, to verify the improved performance of our proposed method. In our experiments, we used five quantitative measures: prediction sum of squares, R-squared, log-likelihood, Akaike information criterion, and Bayesian information criterion, to evaluate the model’s performance on the original and synthetic data sets. We found that our proposed method outperforms the traditional methods on all measures.
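The core zero-replacement idea can be sketched as follows; the document-term matrix is invented, and uniform noise stands in for the GAN-generated values, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical document-term frequency matrix, mostly zeros.
dtm = np.array([
    [3, 0, 0, 1, 0],
    [0, 2, 0, 0, 0],
    [1, 0, 4, 0, 0],
], dtype=float)

# Replace exact zeros with very small positive noise so downstream
# statistical models are not dominated by the zero-inflated mass.
# (In the paper a GAN learns to produce these values; uniform noise
# here is only a stand-in.)
noise = rng.uniform(1e-4, 1e-2, size=dtm.shape)
synthetic = np.where(dtm == 0, noise, dtm)

print(synthetic.min() > 0)                            # no exact zeros remain
print(np.allclose(synthetic[dtm > 0], dtm[dtm > 0]))  # nonzero counts kept
```

The nonzero frequencies are left untouched, so the synthetic matrix preserves the observed term counts while removing the point mass at zero that distorts likelihood-based models.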
]]>Computers doi: 10.3390/computers12120257
Authors: Felix Kahmann Fabian Honecker Julian Dreyer Marten Fischer Ralf Tönjes
Since the introduction of the first cryptocurrency, Bitcoin, in 2008, the gain in popularity of distributed ledger technologies (DLTs) has led to an increasing demand and, consequently, a larger number of network participants in general. Scaling blockchain-based solutions to cope with several thousand transactions per second or with a growing number of nodes has always been a desirable goal for most developers. Enabling these performance metrics can lead to further acceptance of DLTs and even faster systems in general. With the introduction of directed acyclic graphs (DAGs) as the underlying data structure to store the transactions within the distributed ledger, major performance gains have been achieved. In this article, we review the most prominent directed acyclic graph platforms and evaluate their key performance indicators in terms of transaction throughput and network latency. The evaluation aims to show whether the theoretically improved scalability of DAGs also applies in practice. For this, we set up multiple test networks for each DAG and blockchain framework and conducted broad performance measurements to have a mutual basis for comparison between the different solutions. Using the transactions per second numbers of each technology, we created a side-by-side evaluation that allows for a direct scalability estimation of the systems. Our findings support the fact that, due to their internal, more parallelly oriented data structure, DAG-based solutions offer significantly higher transaction throughput in comparison to blockchain-based platforms. However, given their relatively early maturity, fully DAG-based platforms need to further evolve their feature sets to reach the same level of programmability and adoption as modern blockchain platforms.
With our findings at hand, developers of modern digital storage systems are able to reasonably determine whether to use a DAG-based distributed ledger technology solution in their production environment, e.g., replacing a database system with a DAG platform. Furthermore, we provide two real-world application scenarios, one being smart grid communication and the other originating from trusted supply chain management, that benefit from the introduction of DAG-based technologies.
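The structural difference driving the throughput gap can be sketched with a minimal transaction DAG in the style of tip-selection ledgers such as IOTA's Tangle (an assumption for illustration; the reviewed platforms each use their own attachment and consensus rules). Each new transaction approves up to two earlier ones, so many transactions can be attached in parallel instead of queueing for the next block.

```python
import random

random.seed(42)
approvals = {0: []}   # tx id -> list of approved tx ids; 0 is the genesis
tips = {0}            # transactions not yet approved by anyone

for tx in range(1, 20):
    # Each new transaction approves up to two earlier transactions.
    parents = random.sample(range(tx), k=min(2, tx))
    approvals[tx] = parents
    tips -= set(parents)
    tips.add(tx)

print(f"transactions: {len(approvals)}, current tips: {len(tips)}")
# A blockchain has exactly one "tip" (the chain head); a DAG keeps many,
# which is the structural source of its higher parallel throughput.
```

Because parents always have smaller ids than their children, the structure is acyclic by construction.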
]]>Computers doi: 10.3390/computers12120256
Authors: Maoli Wang Yu Sun Hongtao Sun Bowen Zhang
The Industrial Internet of Things (IIoT), in which numerous smart devices equipped with sensors and actuators, computers, and people communicate over shared networks, has brought advantages to many fields, such as smart manufacturing, intelligent transportation, and smart grids. However, security is becoming increasingly challenging due to the vulnerability of the IIoT to various malicious attacks. In this paper, the security issues of the IIoT are reviewed from three aspects: (1) security threats and their attack mechanisms are presented to illustrate the vulnerability of the IIoT; (2) intrusion detection methods are surveyed from the perspective of attack identification; and (3) defense strategies are comprehensively summarized. Several concluding remarks and promising future directions are provided at the end of this paper.
]]>Computers doi: 10.3390/computers12120255
Authors: Paria Sarzaeim Qusay H. Mahmoud Akramul Azim Gary Bauer Ian Bowles
Smart policing refers to the use of advanced technologies such as artificial intelligence to enhance policing activities in terms of crime prevention or crime reduction. Artificial intelligence tools, including machine learning and natural language processing, have widespread applications across various fields, such as healthcare, business, and law enforcement. By means of these technologies, smart policing enables organizations to efficiently process and analyze large volumes of data. Some examples of smart policing applications are fingerprint detection, DNA matching, CCTV surveillance, and crime prediction. While artificial intelligence offers the potential to reduce human errors and biases, it is still essential to acknowledge that the algorithms reflect the data on which they are trained, which are ultimately collected through human input. Considering the critical role of the police in ensuring public safety, the adoption of these algorithms demands careful and thoughtful implementation. This paper presents a systematic literature review focused on exploring the machine learning techniques employed by law enforcement agencies. It aims to shed light on the benefits and limitations of utilizing these techniques in smart policing and provide insights into the effectiveness and challenges associated with the integration of machine learning in law enforcement practices.
]]>Computers doi: 10.3390/computers12120254
Authors: Baasanjargal Erdenebat Bayarjargal Bud Temuulen Batsuren Tamás Kozsik
DevOps methodology and tools, which provide standardized ways of performing continuous integration (CI) and continuous deployment (CD), are invaluable for efficient software development. Current DevOps solutions, however, lack a useful capability: they do not support the simultaneous development and deployment of multiple projects on the same operating infrastructure (e.g., a cluster of Docker containers). In this paper, we propose a novel approach to address this shortcoming by defining a multi-project, multi-environment (MPME) approach. With this approach, a large company can organize many microservice-based projects operating simultaneously on a common code base, using self-hosted Kubernetes clusters, which helps developers and businesses focus better on the product they are developing and reduces the effort spent managing their DevOps infrastructure.
]]>Computers doi: 10.3390/computers12120253
Authors: Thimo F. Schindler Simon Schlicht Klaus-Dieter Thoben
In the development and integration of data-driven process models, the underlying process is digitally mapped into a model through sensory data acquisition and subsequent modelling. Challenges of different types and degrees of severity arise in each modelling step of the Cross-Industry Standard Process for Data Mining (CRISP-DM). Particularly in the context of data acquisition and integration into the process model, it can be assumed with sufficiently high probability that the acquired data contain anomalies of various kinds. These outliers must be detected in the data preparation and processing phase and dealt with accordingly; doing so positively impacts the subsequent modelling in terms of accuracy and precision. Therefore, this paper shows how outliers can be identified using the unsupervised machine learning methods autoencoder, Density-Based Spatial Clustering of Applications with Noise (DBSCAN), Isolation Forest (iForest), and One-Class Support Vector Machine (OCSVM). After implementing these methods, we compared them using the Numenta Anomaly Benchmark (NAB) and present their individual strengths and weaknesses. Evaluating the correctness, distinctiveness, and robustness criteria described in the paper showed that the One-Class Support Vector Machine stood out among the methods considered, achieving acceptable anomaly detection on the available process datasets with comparatively little effort.
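The best-performing method of the comparison, the One-Class SVM, can be sketched with scikit-learn on a synthetic process signal standing in for the paper's datasets; the injected anomaly positions and the `nu` value are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Synthetic univariate process signal with three injected anomalies.
rng = np.random.default_rng(1)
signal = rng.normal(loc=10.0, scale=0.5, size=300)
signal[[50, 120, 250]] = [20.0, -3.0, 18.0]

# One-Class SVM: nu bounds the expected fraction of outliers.
X = signal.reshape(-1, 1)
ocsvm = OneClassSVM(nu=0.02, kernel="rbf", gamma="scale")
labels = ocsvm.fit_predict(X)            # +1 = inlier, -1 = outlier

outlier_idx = np.flatnonzero(labels == -1)
print("flagged indices:", outlier_idx)
```

The same `fit_predict` interface applies to the other scikit-learn detectors compared in the paper (e.g. `IsolationForest`), which makes a side-by-side NAB-style evaluation straightforward to wire up.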
]]>Computers doi: 10.3390/computers12120252
Authors: Shiyuan Cai Yuchen Cai Liu Liu Haitao Han Feng Bao
Due to increasing traffic pressure, traditional urban rail vehicle–ground communication systems are no longer able to meet growing communication requirements. In this paper, ad hoc networks are applied to urban rail transit vehicle–ground communication systems to improve link reliability and reduce transmission delay. For the proposed network, a service-driven routing algorithm is introduced that considers distance in cluster head selection and optimizes routing transmission delay according to service priority and congestion level. An auxiliary-node-based routing maintenance mechanism is also proposed to avoid frequent breakage of communication links caused by the high-speed movement of trains. Simulations show that, compared with traditional routing algorithms, the proposed algorithm effectively reduces the packet loss rate, end-to-end delay, and routing overhead of vehicle–ground communication, making it better suited to the requirements of next-generation urban rail transit vehicle–ground communication.
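The distance factor in cluster-head selection can be illustrated with a toy example; the node positions are invented, and this is a simplification of the paper's service-driven algorithm, which additionally weighs service priority and congestion. Here the node closest to the cluster centroid becomes the head, minimising the average distance to its members.

```python
import math

# Hypothetical node positions within one cluster (x, y in arbitrary units).
nodes = {"A": (0.0, 0.0), "B": (4.0, 0.0), "C": (2.0, 3.0), "D": (2.0, 1.0)}

# Cluster centroid.
cx = sum(x for x, _ in nodes.values()) / len(nodes)
cy = sum(y for _, y in nodes.values()) / len(nodes)

# Distance-based selection: the node nearest the centroid is the head.
head = min(nodes, key=lambda n: math.dist(nodes[n], (cx, cy)))
print("cluster head:", head)
```

In the full algorithm, this distance term would be combined with per-service priority and congestion metrics before the minimum is taken.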
]]>Computers doi: 10.3390/computers12120251
Authors: Martin Wynn Kerstin Felser
As digitalisation sweeps through industries, companies are having to deal with the resultant changes in business models, core processes and organisational structures. This includes the reassessment of the role of the IT department, traditionally the guardians of technology standards and the providers and maintainers of corporate systems and infrastructure. This article investigates this dynamic in two research studies. Study 1 focuses on the German automotive industry and adopts a qualitative inductive approach based on interviews with IT practitioners to ascertain the key aspects of digitalisation impacting the industry and to chart the emergence of a new model for the management of IT. Study 2 then reviews the deployment of digital technologies in other industry sectors via questionnaire responses from senior IT professionals in eight organisations. The results suggest that the transfer of IT roles and responsibilities to business functions, evident in the German automotive industry, is being replicated in other organisations in which digital technologies are now embedded in an organisation’s products or services. This article concludes with a model for cross-referencing the role of the IT function with the impact of digital technologies, representing a contribution to the growing literature on digital technology deployment in organisations.
]]>Computers doi: 10.3390/computers12120250
Authors: Rohit Mittal Geeta Rani Vibhakar Pathak Sonam Chhikara Vijaypal Singh Dhaka Eugenio Vocaturo Ester Zumpano
The automation industry faces the challenges of avoiding interference with obstacles, estimating the next move of a robot, and optimizing its path in various environments. Although researchers have predicted the next move of a robot in linear and non-linear environments, precise estimation of the sectorial error probability while moving a robot along a curvy path is still lacking. Additionally, existing approaches use visual sensors, incur high robot design costs, and are ineffective in achieving motion stability on various surfaces. To address these issues, the authors of this manuscript propose a low-cost, multisensory robot capable of moving along an optimized path in diverse environments with eight degrees of freedom. The authors use the extended Kalman filter and the unscented Kalman filter for localization and position estimation of the robot. They also compare the sectorial path prediction error at angles from 0° to 180° and demonstrate the mathematical modeling of the various operations involved in navigating the robot. The minimum deviation of 1.125 cm between the actual and predicted paths demonstrates the effectiveness of the robot in a real-life environment.
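The predict/update cycle underlying both filters can be shown with a minimal linear Kalman filter for a constant-velocity model; the extended and unscented variants used in the paper replace the fixed matrices with a linearised (Jacobian) or sigma-point treatment of a nonlinear motion model, and all matrix values below are illustrative assumptions.

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition: [pos, vel]
H = np.array([[1.0, 0.0]])              # we only measure position
Q = 0.01 * np.eye(2)                    # process noise covariance
R = np.array([[0.25]])                  # measurement noise covariance

x = np.array([[0.0], [0.0]])            # initial state estimate
P = np.eye(2)                           # initial state covariance

rng = np.random.default_rng(0)
true_pos = 0.0
for step in range(20):
    true_pos += 1.0                      # true velocity = 1 unit/step
    z = true_pos + rng.normal(0, 0.5)    # noisy position measurement

    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = np.array([[z]]) - H @ x          # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

print(f"estimated position {x[0, 0]:.2f}, velocity {x[1, 0]:.2f}")
```

Note how the filter recovers the unmeasured velocity purely from the sequence of noisy position readings.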
]]>Computers doi: 10.3390/computers12120249
Authors: Broderick Crawford Felipe Cisternas-Caneo Katherine Sepúlveda Ricardo Soto Álex Paz Alvaro Peña Claudio León de la Barra Eduardo Rodriguez-Tello Gino Astorga Carlos Castro Franklin Johnson Giovanni Giachetti
The digitization of information and technological advancements have enabled us to gather vast amounts of data from various domains, including but not limited to medicine, commerce, and mining. Machine learning techniques use this information to improve decision-making, but they have a major drawback: they are very sensitive to data variation, so the data must be cleaned to remove irrelevant and redundant information. This removal of information is known as the Feature Selection Problem. This work presents the Pendulum Search Algorithm applied to the Feature Selection Problem. As the Pendulum Search Algorithm is a metaheuristic designed for continuous optimization problems, a binarization process is performed using the Two-Step Technique. Preliminary results indicate that our proposal obtains competitive results on well-known benchmarks when compared to other metaheuristics from the literature.
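The Two-Step Technique can be sketched as follows: step 1 maps each continuous position component through a transfer function to [0, 1], and step 2 applies a binarization rule to decide whether the corresponding feature is selected. The S-shaped sigmoid and the probabilistic rule below are common choices in the binarization literature; whether the paper uses exactly these is an assumption.

```python
import math
import random

def s_shaped(x: float) -> float:
    """Step 1: S-shaped transfer function mapping a real value to [0, 1]."""
    return 1.0 / (1.0 + math.exp(-x))

def binarize(position, rng):
    """Step 2: turn each transfer value into a 0/1 feature-selection bit."""
    bits = []
    for x in position:
        prob = s_shaped(x)
        bits.append(1 if rng.random() < prob else 0)
    return bits

rng = random.Random(7)
continuous_position = [-2.1, 0.0, 3.5, -0.4, 1.2]   # one pendulum agent
selected = binarize(continuous_position, rng)
print("selected features:", selected)
```

Each metaheuristic agent is binarized this way at every iteration, and the resulting bit vector is evaluated by a classifier-based fitness function on the candidate feature subset.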
]]>Computers doi: 10.3390/computers12120248
Authors: Eren Duman Mehmet S. Aktas Ezgi Yahsi
In today’s financial landscape, traditional banking institutions rely extensively on customers’ historical financial data to evaluate their eligibility for loan approvals. While these decision support systems offer predictive accuracy for established customers, they overlook a crucial demographic: individuals without a financial history. To address this gap, our study presents a methodology for a decision support system that is intended to assist in determining credit risk. Rather than solely focusing on past financial records, our methodology assesses customer credibility by generating credit risk scores derived from psychometric test results. Utilizing machine learning algorithms, we model customer credibility through multidimensional metrics such as character traits and attitudes toward money management. Preliminary results from our prototype testing indicate that this innovative approach holds promise for accurate risk assessment.
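As an illustrative sketch only: a credit-risk score can be modeled from psychometric scores with a simple classifier. The feature names (conscientiousness, impulsivity, planning), the synthetic labels, and the choice of logistic regression are all invented here; the paper's actual tests, features, and models are not specified in this abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 200
# Three hypothetical psychometric scores in [0, 1]:
# conscientiousness, impulsivity, planning.
X = rng.uniform(0, 1, size=(n, 3))

# Hypothetical ground truth: high conscientiousness and planning lower
# default risk, high impulsivity raises it.
logit = -2.0 * X[:, 0] + 1.5 * X[:, 1] - 1.0 * X[:, 2] + rng.normal(0, 0.3, n)
y = (logit > logit.mean()).astype(int)   # 1 = higher risk

model = LogisticRegression().fit(X, y)
applicant = np.array([[0.8, 0.2, 0.7]])  # conscientious, not impulsive, plans
print(f"risk probability: {model.predict_proba(applicant)[0, 1]:.2f}")
```

In a real system the predicted probability would be mapped to a credit risk score and combined with whatever other signals are available for applicants without a financial history.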
]]>Computers doi: 10.3390/computers12120247
Authors: Midya Alqaradaghi Muhammad Zafar Iqbal Nazir Tamás Kozsik
Static analysis is a software testing technique that analyzes code without executing it. It is widely used to detect vulnerabilities, errors, and other issues during software development, and many tools are available for the static analysis of Java code, including SpotBugs. In Java, security checks can be performed using the SecurityManager class; methods that perform such a check must be declared private or final, because otherwise they can be compromised when a malicious subclass overrides them and omits the checks. This paper addresses this problem by building a new automated checker that raises an issue when the rule is violated. The checker is built on the SpotBugs static analysis tool. We evaluated our approach on both custom test cases and real-world software, and the results revealed that the checker successfully detected the related bugs in both, with strong metric values.
]]>Computers doi: 10.3390/computers12120246
Authors: Josep-Lluis Ferrer-Gomila M. Francisca Hinarejos
In this article, we present the first proposal for contract signing based on blockchain that meets the requirements of fairness, hard-timeliness, and bc-optimism. Thanks to the use of blockchain, the proposal does not require trusted third parties (TTPs), thus avoiding a single point of failure and the problem of the signatories having to agree on a TTP trusted by both. The presented protocol is fair because it is designed such that no honest signatory can be placed at a disadvantage. It meets the hard-timeliness requirement because both signatories can end the execution of the protocol at any time they wish. Finally, the proposal is bc-optimistic because blockchain functions are only executed in case of exception (and not in each execution of the protocol), with consequent savings when working with public blockchains. No previous proposal simultaneously met these three requirements. In addition to the above, this article clarifies the concept of timeliness, which has previously been defined in a confusing way (starting with the authors who used the term for the first time). We conducted a security review that allowed us to verify that our proposal meets the desired requirements. Furthermore, we provide the specifications of a smart contract designed for the Ethereum blockchain family and verify the economic feasibility of the proposal, ensuring that it can be aligned with the financial requirements of different scenarios.
]]>Computers doi: 10.3390/computers12120245
Authors: Surasit Songma Theera Sathuphan Thanakorn Pamutha
This article examines intrusion detection systems in depth using the CSE-CIC-IDS-2018 dataset. The investigation proceeds in three stages. First, data cleaning, exploratory data analysis, and data normalization procedures (min-max and Z-score) are used to prepare the data for various classifiers. Second, to improve processing speed and reduce model complexity, a combination of principal component analysis (PCA) and random forest (RF) is used to remove non-significant features, with results compared against the full dataset. Finally, machine learning methods (XGBoost, CART, DT, KNN, MLP, RF, LR, and Bayes) are applied to the selected features and preprocessing procedures, with the XGBoost, DT, and RF models outperforming the others in terms of both ROC values and CPU runtime. The evaluation concludes with the discovery of an optimal configuration, which includes PCA and RF feature selection.
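The normalization → PCA → classifier chain described above can be sketched as a scikit-learn pipeline; the synthetic data stands in for CSE-CIC-IDS-2018, and the component count and hyperparameters are illustrative assumptions rather than the paper's tuned values.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for the intrusion detection dataset.
X, y = make_classification(n_samples=600, n_features=30, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Min-max normalization, PCA dimensionality reduction, then a random
# forest classifier, mirroring the abstract's preprocessing chain.
pipe = make_pipeline(MinMaxScaler(),
                     PCA(n_components=10),
                     RandomForestClassifier(n_estimators=100, random_state=0))
pipe.fit(X_tr, y_tr)
print(f"test accuracy: {pipe.score(X_te, y_te):.2f}")
```

Swapping `MinMaxScaler` for `StandardScaler` gives the Z-score variant, and the final estimator can be replaced by any of the other classifiers compared in the study.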
]]>Computers doi: 10.3390/computers12120244
Authors: Pau Fonseca i Casas Iza Romanowska Joan Garcia i Subirana
Specification and Description Language (SDL) is a language that can represent the behavior and structure of a model completely and unambiguously. It allows the creation of frameworks that can run a model without the need to code it in a specific programming language. This automatic process simplifies the key phases of model building: validation and verification. SDLPS is a simulator that enables the definition and execution of models using SDL. In this paper, we present a new library that enables the execution of SDL models defined on the SDLPS infrastructure on a high-performance computing (HPC) platform, such as a supercomputer, thus significantly speeding up simulation runtime. Moreover, we apply the SDL language to a social science use case, opening a new avenue for bringing HPC power to new groups of users. The tools presented here have the potential to increase the robustness of modeling software by improving the documentation, verification, and validation of models.
]]>