Computers, Volume 12, Issue 11 (November 2023) – 24 articles

Cover Story: Artificial neural networks are parametric machine learning models that are widely used in a wide range of applications. Global optimization techniques such as particle swarm optimization or genetic algorithms can be used to train them effectively. However, for these optimization techniques to be effective, knowledge of the value range of the network parameters is also required, which is not always available. Here, an innovative two-step method is proposed for efficient training of artificial neural networks. In the first stage, grammatical evolution with partitioning and expansion rules determines the value range of the parameters, and in the second stage, the network is trained within the range identified in the first stage. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
22 pages, 1102 KiB  
Article
Smart Contract-Based Access Control Framework for Internet of Things Devices
by Md. Rahat Hasan, Ammar Alazab, Siddhartha Barman Joy, Mohammed Nasir Uddin, Md Ashraf Uddin, Ansam Khraisat, Iqbal Gondal, Wahida Ferdose Urmi and Md. Alamin Talukder
Computers 2023, 12(11), 240; https://doi.org/10.3390/computers12110240 - 20 Nov 2023
Cited by 1 | Viewed by 1978
Abstract
The Internet of Things (IoT) has recently attracted much interest from researchers due to its diverse IoT applications. However, IoT systems encounter additional security and privacy threats. Developing an efficient IoT system is challenging because of its sophisticated network topology. Effective access control is required to ensure user privacy in the Internet of Things. Traditional access control methods are inappropriate for IoT systems because most conventional access control approaches are designed for centralized systems. This paper proposes a decentralized access control framework based on smart contracts with three parts: initialization, an access control protocol, and an inspection. Smart contracts are used in the proposed framework to store access control policies safely on the blockchain. The framework also penalizes users for attempting unauthorized access to the IoT resources. The smart contract was developed using Remix and deployed on the Ropsten Ethereum testnet. We analyze the performance of the smart contract-based access policies based on the gas consumption of blockchain transactions. Further, we analyze the system’s security, usability, scalability, and interoperability performance. Full article
(This article belongs to the Special Issue Software-Defined Internet of Everything)
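As a rough illustration of the policy-plus-penalty idea described in the abstract, the sketch below models an access-control check with a temporary penalty for unauthorized attempts in plain Python. It is a hypothetical stand-in, not the paper's Solidity contract, and all identifiers are invented.

```python
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AccessPolicy:
    resource: str
    allowed_subjects: set
    penalty_seconds: int = 60              # how long a misbehaving subject stays blocked
    blocked_until: dict = field(default_factory=dict)

    def request_access(self, subject: str, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        # Reject subjects that are still serving a penalty.
        if self.blocked_until.get(subject, 0.0) > now:
            return False
        if subject in self.allowed_subjects:
            return True
        # Unauthorized attempt: record a penalty, mimicking an on-chain misbehaviour check.
        self.blocked_until[subject] = now + self.penalty_seconds
        return False

policy = AccessPolicy(resource="sensor-42", allowed_subjects={"0xAliceDevice"})
print(policy.request_access("0xAliceDevice"))   # True
print(policy.request_access("0xMallory"))       # False, and the subject is now penalized
print(policy.request_access("0xMallory"))       # still False while the penalty lasts
```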

21 pages, 4899 KiB  
Article
Pólya’s Methodology for Strengthening Problem-Solving Skills in Differential Equations: A Case Study in Colombia
by Marcos Chacón-Castro, Jorge Buele, Ana Dulcelina López-Rueda and Janio Jadán-Guerrero
Computers 2023, 12(11), 239; https://doi.org/10.3390/computers12110239 - 18 Nov 2023
Viewed by 2536
Abstract
The formation of students is integral to education. Strengthening critical thinking and reasoning are essential for the professionals that today’s world needs. For this reason, the authors of this article applied Pólya’s methodology, an initiative based on observing students’ difficulties when facing mathematical problems. The present study is part of the qualitative and quantitative research paradigm and the action research methodology. In this study, the inquiry process was inductive, the sample is non-probabilistic, and the data interpretation strategy is descriptive. As a case study, six students were enrolled onto a differential equations course at the Universidad Autónoma de Bucaramanga. A didactic process was designed using information and communication technologies (ICTs) in five sequences that address first-order differential equation applications. As a result of the pedagogical intervention, problem-solving skills were strengthened. All this was based on asking the right questions, repeated reading, identifying and defining variables, mathematization, communication, and decomposing the problem into subproblems. This research study seeks to set a precedent in the Latin American region that will be the basis for future studies. Full article
(This article belongs to the Special Issue Present and Future of E-Learning Technologies)
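For readers curious about the kind of ICT support such a sequence might use, here is a minimal SciPy sketch (not taken from the study) that numerically solves a typical first-order application, Newton's law of cooling, so a solution obtained by hand via Pólya's steps can be checked against a computed one. All constants are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

k, T_env = 0.1, 20.0                       # illustrative cooling constant and ambient temperature

def cooling(t, T):
    return -k * (T - T_env)                # dT/dt = -k (T - T_env)

# Integrate from t = 0 to 60 minutes starting at 90 °C and print a few sample points.
sol = solve_ivp(cooling, t_span=(0, 60), y0=[90.0], t_eval=np.linspace(0, 60, 7))
for t, T in zip(sol.t, sol.y[0]):
    print(f"t = {t:4.1f} min, T = {T:6.2f} °C")
```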

23 pages, 1427 KiB  
Article
Ontology Development for Detecting Complex Events in Stream Processing: Use Case of Air Quality Monitoring
by Rose Yemson, Sohag Kabir, Dhavalkumar Thakker and Savas Konur
Computers 2023, 12(11), 238; https://doi.org/10.3390/computers12110238 - 16 Nov 2023
Viewed by 1361
Abstract
With the increasing amount of data collected by IoT devices, detecting complex events in real time has become a challenging task. To overcome this challenge, we propose the utilisation of semantic web technologies to create ontologies that structure background knowledge about the complex event-processing (CEP) framework in a way that machines can easily comprehend. Our ontology focuses on Indoor Air Quality (IAQ) data, asthma patients’ activities and symptoms, and how IAQ can be related to asthma symptoms and daily activities. Our goal is to detect complex events within the stream of events and accurately determine pollution levels and symptoms of asthma attacks based on daily activities. We conducted thorough testing of our enhanced CEP framework with a real dataset, and the results indicate that it outperforms traditional CEP across various evaluation metrics such as accuracy, precision, recall, and F1-score. Full article
(This article belongs to the Special Issue When Blockchain Meets IoT: Challenges and Potentials)
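The following toy sketch, which is not the paper's ontology-driven CEP engine, shows the general shape of a complex-event rule: it flags a possible asthma trigger when indoor PM2.5 stays above a threshold over a window of readings while the logged activity is "cooking". The threshold, window size, and events are all invented.

```python
from collections import deque

PM25_LIMIT = 25.0        # illustrative µg/m3 threshold
WINDOW = 3               # number of consecutive readings that must exceed the limit

stream = [
    {"pm25": 12.0, "activity": "resting"},
    {"pm25": 31.5, "activity": "cooking"},
    {"pm25": 40.2, "activity": "cooking"},
    {"pm25": 38.7, "activity": "cooking"},
    {"pm25": 18.3, "activity": "resting"},
]

window = deque(maxlen=WINDOW)
for i, event in enumerate(stream):
    window.append(event)
    # Complex event: every reading in the window is above the limit during cooking.
    if len(window) == WINDOW and all(
        e["pm25"] > PM25_LIMIT and e["activity"] == "cooking" for e in window
    ):
        print(f"Complex event at reading {i}: sustained high PM2.5 during cooking")
```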

29 pages, 4041 KiB  
Article
Mapping the Evolution of Cybernetics: A Bibliometric Perspective
by Bianca Cibu, Camelia Delcea, Adrian Domenteanu and Gabriel Dumitrescu
Computers 2023, 12(11), 237; https://doi.org/10.3390/computers12110237 - 16 Nov 2023
Cited by 9 | Viewed by 1692
Abstract
In this study, we undertake a comprehensive bibliometric analysis of the cybernetics research field. We compile a dataset of 4856 papers from the ISI Web of Science database spanning 1975–2022, employing keywords related to cybernetics. Our findings reveal an annual growth rate of 7.56% in cybernetics research over this period, indicating sustained scholarly interest. By examining the annual progression of scientific production, we have identified three distinct periods characterized by significant disruptions in yearly publication trends. These disruptions have been thoroughly investigated within the paper, utilizing a longitudinal analysis of thematic evolution. We also identify emerging research trends through keyword analysis. Furthermore, we investigate collaborative networks among authors, their institutional affiliations, and global representation to elucidate the dissemination of cybernetics research. Employing n-gram analysis, we uncover diverse applications of cybernetics in fields such as computer science, information science, social sciences, sustainable development, supply chain, knowledge management, system dynamics, and medicine. The study contributes to enhancing the understanding of the evolving cybernetics landscape. Moreover, the conducted analysis underscores the versatile applicability across various academic and practical domains associated with the cybernetics field. Full article
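As an aside on the reported 7.56% figure, annual growth rates in bibliometric tools are typically computed as a compound annual growth rate over publication counts; the sketch below shows that calculation with made-up start and end counts.

```python
def annual_growth_rate(first_year_count: int, last_year_count: int, n_years: int) -> float:
    """CAGR = (last / first) ** (1 / n_years) - 1, expressed as a percentage."""
    return ((last_year_count / first_year_count) ** (1.0 / n_years) - 1.0) * 100.0

# Hypothetical counts: 8 papers in 1975 growing to 245 papers in 2022 (47 elapsed years).
print(f"{annual_growth_rate(8, 245, 47):.2f}% per year")
```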

20 pages, 5109 KiB  
Review
Design Recommendations for Immersive Virtual Reality Application for English Learning: A Systematic Review
by Jessica Rodrigues Esteves, Jorge C. S. Cardoso and Berenice Santos Gonçalves
Computers 2023, 12(11), 236; https://doi.org/10.3390/computers12110236 - 15 Nov 2023
Viewed by 1650
Abstract
The growing popularity of immersive virtual reality (iVR) technologies has opened up new possibilities for learning English. In the literature, it is possible to find several studies focused on the design, development, and evaluation of immersive virtual reality applications. However, there are no studies that systematize design recommendations for immersive virtual reality applications for English learning. To fill this gap, we present a systematic review that aims to identify design recommendations for immersive virtual reality English learning applications. We searched the ACM Digital Library, ERIC, IEEE Xplore, Scopus, and Web of Science (1 January 2010 to April 2023) and found that 24 out of 847 articles met the inclusion criteria. We identified 18 categories of design considerations related to design and learning and a design process used to create iVR applications. We also identified existing trends related to universities, publications, devices, human senses, and development platforms. Finally, we addressed study limitations and future directions for designing iVR applications for English learning. Full article
(This article belongs to the Special Issue Recent Advances in Computer-Assisted Learning)

17 pages, 703 KiB  
Article
Enhancing Web Application Security through Automated Penetration Testing with Multiple Vulnerability Scanners
by Khaled Abdulghaffar, Nebrase Elmrabit and Mehdi Yousefi
Computers 2023, 12(11), 235; https://doi.org/10.3390/computers12110235 - 15 Nov 2023
Cited by 1 | Viewed by 3087
Abstract
Penetration testers have increasingly adopted multiple penetration testing scanners to ensure the robustness of web applications. However, a notable limitation of many scanning techniques is their susceptibility to producing false positives. This paper presents a novel framework designed to automate the operation of multiple Web Application Vulnerability Scanners (WAVS) within a single platform. The framework generates a combined vulnerabilities report using two algorithms: an automation algorithm and a novel combination algorithm that produces comprehensive lists of detected vulnerabilities. The framework leverages the capabilities of two web vulnerability scanners, Arachni and OWASP ZAP. The study begins with an extensive review of the existing scientific literature, focusing on open-source WAVS and exploring the OWASP 2021 guidelines. Following this, the framework development phase addresses the challenge of varying results obtained from different WAVS. This framework’s core objective is to combine the results of multiple WAVS into a consolidated vulnerability report, ultimately improving detection rates and overall security. The study demonstrates that the combined outcomes produced by the proposed framework exhibit greater accuracy compared to individual scanning results obtained from Arachni and OWASP ZAP. In summary, the study reveals that the Union List outperforms individual scanners, particularly regarding recall and F-measure. Consequently, adopting multiple vulnerability scanners is recommended as an effective strategy to bolster vulnerability detection in web applications. Full article
(This article belongs to the Special Issue Using New Technologies on Cyber Security Solutions)
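A minimal sketch of the two ideas the abstract refers to, merging scanner findings into a union list and scoring it with recall and F-measure against known vulnerabilities, is given below. The findings and ground truth are invented and do not come from the paper's experiments.

```python
arachni   = {"SQLi:/login", "XSS:/search", "CSRF:/profile"}
owasp_zap = {"XSS:/search", "XSS:/comments", "Path Traversal:/files"}
ground_truth = {"SQLi:/login", "XSS:/search", "XSS:/comments",
                "Path Traversal:/files", "SSRF:/fetch"}

def scores(found, truth):
    # Precision, recall, and F1 of a finding set against the known vulnerabilities.
    tp = len(found & truth)
    precision = tp / len(found) if found else 0.0
    recall = tp / len(truth) if truth else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

union_list = arachni | owasp_zap
for name, found in [("Arachni", arachni), ("OWASP ZAP", owasp_zap), ("Union list", union_list)]:
    p, r, f = scores(found, ground_truth)
    print(f"{name:10s} precision={p:.2f} recall={r:.2f} F1={f:.2f}")
```

On this toy data the union list attains the highest recall and F1, mirroring the qualitative claim in the abstract.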

18 pages, 4553 KiB  
Article
Real-Time Network Video Data Streaming in Digital Medicine
by Miklos Vincze, Bela Molnar and Miklos Kozlovszky
Computers 2023, 12(11), 234; https://doi.org/10.3390/computers12110234 - 14 Nov 2023
Viewed by 1434
Abstract
Today, the use of digital medicine is becoming more and more common in medicine. With the use of digital medicine, health data can be shared, processed, and visualized using computer algorithms. One of the problems currently facing digital medicine is the rapid transmission of large amounts of data and their appropriate visualization, even in 3D. Advances in technology offer the possibility to use new image processing, networking, and visualization solutions for the evaluation of medical samples. Because of the resolution of the samples, it is not uncommon that it takes a long time for them to be analyzed, processed, and shared. This is no different for 3D visualization. In order to be able to display digitalized medical samples in 3D at high resolution, a computer with computing power that is not necessarily available to doctors and researchers is needed. COVID-19 has shown that everyday work must continue even when there is a physical distance between the participants. Real-time network streaming can provide a solution to this, by creating a 3D environment that can be shared between doctors/researchers in which the sample being examined can be visualized. In order for this 3D environment to be available to everyone, it must also be usable on devices that do not have high computing capacity. Our goal was to design a general-purpose solution that would allow users to visualize large amounts of medical imaging data in 3D, regardless of the computational capacity of the device they are using. With the solution presented in this paper, our goal was to create a 3D environment for physicians and researchers to collaboratively evaluate 3D medical samples in an interdisciplinary way. Full article
(This article belongs to the Special Issue Advances in High-Performance Switching and Routing)

10 pages, 373 KiB  
Article
PUFGuard: Vehicle-to-Everything Authentication Protocol for Secure Multihop Mobile Communication
by Fayez Gebali and Mohamed K. Elhadad
Computers 2023, 12(11), 233; https://doi.org/10.3390/computers12110233 - 14 Nov 2023
Viewed by 1365
Abstract
Vehicle area networks (VANs) encompass a spectrum of communication modes, including point-to-point visible light communication, 5G/6G cellular wireless communication, and Wi-Fi ad hoc multihop communication. The main focus of this paper is the introduction and application of physically unclonable functions (PUFs) as a pivotal element in secure key generation, authentication processes, and trust metric definition for neighboring vehicles. The multifaceted protocols proposed herein encompass comprehensive security considerations, ranging from authentication and anonymity to the imperative aspects of the proof of presence, freshness, and ephemeral session key exchanges. This paper provides a systematic and comprehensive framework for enhancing security in VANs, which is of paramount importance in the context of modern smart transportation systems. The contributions of this work are multifarious and can be summarized as follows: (1) Presenting an innovative and robust approach to secure key generation based on PUFs, ensuring the dynamic nature of the authentication. (2) Defining trust metrics reliant on PUFs to ascertain the authenticity and integrity of proximate vehicles. (3) Using the proposed framework to enable seamless transitions between different communication protocols, such as the migration from 5G/6G to Wi-Fi, by introducing the concept of multimodal authentication, which accommodates a wide spectrum of vehicle capabilities. Furthermore, upholding privacy through the encryption and concealment of PUF responses safeguards the identity of vehicles during communication. Full article
(This article belongs to the Special Issue IoT: Security, Privacy and Best Practices 2024)
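Because PUF responses come from device hardware, they cannot be reproduced in software; the hedged sketch below therefore uses a keyed HMAC as a stand-in "PUF" purely to show the shape of a challenge-response authentication with a fresh nonce per session. It is not the protocol from the paper.

```python
import hmac, hashlib, os

class FakePUF:
    """Software stand-in for a hardware PUF: maps a challenge to a device-unique response."""
    def __init__(self, device_secret: bytes):
        self._secret = device_secret       # a real PUF derives this from physical structure

    def respond(self, challenge: bytes) -> bytes:
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

# Enrollment: the verifier stores challenge/response pairs obtained in a trusted setting.
device = FakePUF(os.urandom(32))
challenge = os.urandom(16)
enrolled_response = device.respond(challenge)

# Authentication: the verifier replays the challenge with a fresh nonce and checks that the
# device can bind the expected response to that nonce (freshness + proof of presence).
nonce = os.urandom(16)
device_proof = hmac.new(device.respond(challenge), nonce, hashlib.sha256).digest()
verifier_proof = hmac.new(enrolled_response, nonce, hashlib.sha256).digest()
print("authenticated:", hmac.compare_digest(device_proof, verifier_proof))
```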

17 pages, 2738 KiB  
Article
Analyzing the Spread of Misinformation on Social Networks: A Process and Software Architecture for Detection and Analysis
by Zafer Duzen, Mirela Riveni and Mehmet S. Aktas
Computers 2023, 12(11), 232; https://doi.org/10.3390/computers12110232 - 14 Nov 2023
Viewed by 2988
Abstract
The rapid dissemination of misinformation on social networks, particularly during public health crises like the COVID-19 pandemic, has become a significant concern. This study investigates the spread of misinformation on social network data using social network analysis (SNA) metrics, and more generally by using well-known network science metrics. Moreover, we propose a process design that utilizes social network data from Twitter to analyze the involvement of non-trusted accounts in spreading misinformation, supported by a proof-of-concept prototype. The proposed prototype includes modules for data collection, data preprocessing, network creation, centrality calculation, community detection, and misinformation spreading analysis. We conducted an experimental study on a COVID-19-related Twitter dataset using the modules. The results demonstrate the effectiveness of our approach and process steps, and provide valuable insight into the application of network science metrics on social network data for analyzing various influence parameters in misinformation spreading. Full article
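As a small illustration of the kind of computation the centrality and community-detection modules perform, the sketch below applies networkx to an invented toy interaction graph; the real pipeline builds its graph from collected Twitter data.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy "who interacted with whom" edge list (invented accounts).
edges = [("alice", "bob"), ("bob", "carol"), ("carol", "alice"),
         ("dave", "erin"), ("erin", "frank"), ("frank", "dave"), ("carol", "dave")]
G = nx.Graph(edges)

degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
communities = greedy_modularity_communities(G)

print("most central (degree):", max(degree, key=degree.get))
print("most central (betweenness):", max(betweenness, key=betweenness.get))
print("communities:", [sorted(c) for c in communities])
```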

15 pages, 1559 KiB  
Article
Utilizing an Attention-Based LSTM Model for Detecting Sarcasm and Irony in Social Media
by Deborah Olaniyan, Roseline Oluwaseun Ogundokun, Olorunfemi Paul Bernard, Julius Olaniyan, Rytis Maskeliūnas and Hakeem Babalola Akande
Computers 2023, 12(11), 231; https://doi.org/10.3390/computers12110231 - 14 Nov 2023
Viewed by 1769
Abstract
Sarcasm and irony represent intricate linguistic forms in social media communication, demanding nuanced comprehension of context and tone. In this study, we propose an advanced natural language processing methodology utilizing long short-term memory with an attention mechanism (LSTM-AM) to achieve an impressive accuracy of 99.86% in detecting and interpreting sarcasm and irony within social media text. Our approach involves developing novel deep learning models adept at capturing subtle cues, contextual dependencies, and sentiment shifts inherent in sarcastic or ironic statements. Furthermore, we explore the potential of transfer learning from extensive language models and integrating multimodal information, such as emojis and images, to heighten the precision of sarcasm and irony detection. Rigorous evaluation against benchmark datasets and real-world social media content showcases the efficacy of our proposed models. The outcomes of this research hold paramount significance, offering a substantial advancement in comprehending intricate language nuances in digital communication. These findings carry profound implications for sentiment analysis, opinion mining, and an enhanced understanding of social media dynamics. Full article
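A rough Keras sketch of an LSTM-with-attention text classifier in the spirit of the LSTM-AM model is shown below; the layer sizes, vocabulary size, and sequence length are placeholders, and this is not the authors' architecture or code.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

vocab_size, seq_len, embed_dim = 20000, 60, 128   # illustrative hyperparameters

tokens = layers.Input(shape=(seq_len,), dtype="int32")
x = layers.Embedding(vocab_size, embed_dim)(tokens)
h = layers.LSTM(64, return_sequences=True)(x)       # per-token hidden states
attended = layers.Attention()([h, h])               # self-attention over the LSTM outputs
pooled = layers.GlobalAveragePooling1D()(attended)
output = layers.Dense(1, activation="sigmoid")(pooled)   # sarcastic / not sarcastic

model = Model(tokens, output)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```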

14 pages, 525 KiB  
Article
Moving towards a Mutant-Based Testing Tool for Verifying Behavior Maintenance in Test Code Refactorings
by Tiago Samuel Rodrigues Teixeira, Fábio Fagundes Silveira and Eduardo Martins Guerra
Computers 2023, 12(11), 230; https://doi.org/10.3390/computers12110230 - 13 Nov 2023
Viewed by 1269
Abstract
Evaluating mutation testing behavior can help decide whether refactoring successfully maintains the expected initial test results. Moreover, manually performing this analytical work is both time-consuming and prone to errors. This paper extends an approach to assess test code behavior and proposes a tool called MeteoR. This tool comprises an IDE plugin to detect issues that may arise during test code refactoring, reducing the effort required to perform evaluations. A preliminary assessment was conducted to validate the tool and ensure the proposed test code refactoring approach is adequate. By analyzing not only the mutation score but also the generated mutants in the pre- and post-refactoring process, results show that the approach is capable of checking whether the behavior of the mutants remains unchanged throughout the refactoring process. This proposal represents one more step toward the practice of test code refactoring. It can improve overall software quality, allowing developers and testers to safely refactor the test code in a scalable and automated way. Full article
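Conceptually, the behaviour check the abstract describes boils down to comparing which mutants the test suite kills before and after refactoring; the sketch below shows that comparison with invented mutant identifiers and is not the MeteoR plugin itself.

```python
# Mutants killed by the test suite before and after refactoring the test code (invented IDs).
killed_before = {"M01", "M02", "M05", "M07"}
killed_after  = {"M01", "M02", "M05", "M07"}

newly_surviving = killed_before - killed_after   # mutants the refactored tests no longer kill
newly_killed = killed_after - killed_before      # mutants only the refactored tests kill

if not newly_surviving and not newly_killed:
    print("Kill sets identical: test behaviour appears preserved by the refactoring.")
else:
    print("Behaviour change suspected for mutants:", sorted(newly_surviving | newly_killed))
```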

12 pages, 1931 KiB  
Article
DephosNet: A Novel Transfer Learning Approach for Dephosphorylation Site Prediction
by Qing Yang, Xun Wang and Pan Zheng
Computers 2023, 12(11), 229; https://doi.org/10.3390/computers12110229 - 10 Nov 2023
Viewed by 1253
Abstract
Protein dephosphorylation is the process of removing phosphate groups from protein molecules, which plays a vital role in regulating various cellular processes and intricate protein signaling networks. The identification and prediction of dephosphorylation sites are crucial for this process. Previously, there was a lack of effective deep learning models for predicting these sites, often resulting in suboptimal outcomes. In this study, we introduce a deep learning framework known as “DephosNet”, which leverages transfer learning to enhance dephosphorylation site prediction. DephosNet employs dual-window sequential inputs that are embedded and subsequently processed through a series of network architectures, including ResBlock, Multi-Head Attention, and BiGRU layers. It generates predictions for both dephosphorylation and phosphorylation site probabilities. DephosNet is pre-trained on a phosphorylation dataset and then fine-tuned on the parameters with a dephosphorylation dataset. Notably, transfer learning significantly enhances DephosNet’s performance on the same dataset. Experimental results demonstrate that, when compared with other state-of-the-art models, DephosNet outperforms them on both the independent test sets for phosphorylation and dephosphorylation. Full article
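The pre-train-then-fine-tune pattern the abstract describes can be sketched in a few lines of Keras; the model below is deliberately small, the data are random placeholders, and it is not the DephosNet architecture itself.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_model(window=21, n_amino_acids=21):
    # Tiny sequence classifier: embedding + BiGRU + sigmoid site probability.
    inp = layers.Input(shape=(window,), dtype="int32")
    x = layers.Embedding(n_amino_acids, 16)(inp)
    x = layers.Bidirectional(layers.GRU(32))(x)
    out = layers.Dense(1, activation="sigmoid")(x)
    model = Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

rng = np.random.default_rng(0)
X_phos, y_phos = rng.integers(0, 21, (2000, 21)), rng.integers(0, 2, 2000)      # placeholder data
X_dephos, y_dephos = rng.integers(0, 21, (300, 21)), rng.integers(0, 2, 300)    # smaller target set

model = build_model()
model.fit(X_phos, y_phos, epochs=2, verbose=0)        # stage 1: pre-train on phosphorylation data
model.fit(X_dephos, y_dephos, epochs=2, verbose=0)    # stage 2: fine-tune on dephosphorylation data
print(model.evaluate(X_dephos, y_dephos, verbose=0))
```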

13 pages, 2493 KiB  
Article
Authority Transfer According to a Driver Intervention Intention Considering Coexistence of Communication Delay
by Taeyoon Lim, Myeonghwan Hwang, Eugene Kim and Hyunrok Cha
Computers 2023, 12(11), 228; https://doi.org/10.3390/computers12110228 - 08 Nov 2023
Viewed by 1257
Abstract
Recently, interest and research on autonomous driving technology have been active. However, proving the safety of autonomous vehicles and commercializing them remain key challenges. According to a report on self-driving released by the California Department of Motor Vehicles, it is still hard to say that self-driving technology is highly reliable. Until fully autonomous driving is realized, transferring authority to humans is necessary to ensure the safety of autonomous driving. Several technologies, such as teleoperation and haptic-based approaches, are being developed based on human-machine interaction systems. This study deals with teleoperation and presents a way to switch control from autonomous vehicles to remote drivers. Although many studies address how to perform teleoperation, few deal with the communication delays that occur when switching control. Such delays inevitably occur during control handover, and the potential risks and accidents associated with the magnitude of the delay cannot be ignored. This study examines compensation for communication latency during remote control attempts and determines the level of latency acceptable for enabling remote operation. It also improves the safety and reliability of autonomous vehicles through measures that reduce the size of the communication delay when attempting teleoperation, which is expected to prevent human and material damage in actual accident situations. Full article
(This article belongs to the Topic Electronic Communications, IOT and Big Data)

34 pages, 18856 KiB  
Article
Creating Location-Based Augmented Reality Games and Immersive Experiences for Touristic Destination Marketing and Education
by Alexandros Kleftodimos, Athanasios Evagelou, Stefanos Gkoutzios, Maria Matsiola, Michalis Vrigkas, Anastasia Yannacopoulou, Amalia Triantafillidou and Georgios Lappas
Computers 2023, 12(11), 227; https://doi.org/10.3390/computers12110227 - 07 Nov 2023
Cited by 3 | Viewed by 2436
Abstract
The aim of this paper is to present an approach that utilizes several mixed reality technologies for touristic promotion and education. More specifically, mixed reality applications and games were created to promote the mountainous areas of Western Macedonia, Greece, and to educate visitors on various aspects of these destinations, such as their history and cultural heritage. Location-based augmented reality (AR) games were designed to guide the users to visit and explore the destinations, get informed, gather points and prizes by accomplishing specific tasks, and meet virtual characters that tell stories. Furthermore, an immersive lab was established to inform visitors about the region of interest through mixed reality content designed for entertainment and education. The lab visitors can experience content and games through virtual reality (VR) and augmented reality (AR) wearable devices. Likewise, 3D content can be viewed through special stereoscopic monitors. An evaluation of the lab experience was performed with a sample of 82 visitors who positively evaluated features of the immersive experience such as the level of satisfaction, immersion, educational usefulness, the intention to visit the mountainous destinations of Western Macedonia, intention to revisit the lab, and intention to recommend the experience to others. Full article
(This article belongs to the Special Issue Game-Based Learning, Gamification in Education and Serious Games 2023)

16 pages, 767 KiB  
Article
Constructing the Bounds for Neural Network Training Using Grammatical Evolution
by Ioannis G. Tsoulos, Alexandros Tzallas and Evangelos Karvounis
Computers 2023, 12(11), 226; https://doi.org/10.3390/computers12110226 - 05 Nov 2023
Viewed by 1886
Abstract
Artificial neural networks are widely established models of computational intelligence that have been tested for their effectiveness in a variety of real-world applications. These models require a set of parameters to be fitted through the use of an optimization technique. However, an issue that researchers often face is finding an efficient range of values for the parameters of the artificial neural network. This paper proposes an innovative technique for generating a promising range of values for the parameters of the artificial neural network. Finding the value field is conducted by a series of rules for partitioning the original set of values or expanding it, the rules of which are generated using grammatical evolution. After finding a promising interval of values, any optimization technique such as a genetic algorithm can be used to train the artificial neural network on that interval of values. The new technique was tested on a wide range of problems from the relevant literature and the results were extremely promising. Full article
(This article belongs to the Special Issue Deep Learning and Explainable Artificial Intelligence)
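A toy sketch of the two-stage idea follows: a parameter interval is first refined with "partition" and "expand" rules (here selected by a simple random-sampling score rather than by grammatical evolution), and an optimizer is then run only within the resulting bounds. The quadratic stand-in loss and all constants are invented.

```python
import random
from scipy.optimize import differential_evolution

def partition(lo, hi, keep="left"):
    mid = (lo + hi) / 2.0
    return (lo, mid) if keep == "left" else (mid, hi)

def expand(lo, hi, factor=2.0):
    width = (hi - lo) * (factor - 1.0) / 2.0
    return lo - width, hi + width

def loss(w):
    # Stand-in for a network training error; the "good" parameter value is 1.7.
    return sum((wi - 1.7) ** 2 for wi in w)

def interval_score(bounds, dims=4, samples=40):
    # Crude proxy: best loss found by random sampling inside the candidate interval.
    return min(loss([random.uniform(*bounds) for _ in range(dims)]) for _ in range(samples))

lo, hi = -10.0, 10.0
for _ in range(5):                                    # stage 1: refine the interval with the rules
    candidates = [partition(lo, hi, "left"), partition(lo, hi, "right"), expand(lo, hi)]
    lo, hi = min(candidates, key=interval_score)

result = differential_evolution(loss, bounds=[(lo, hi)] * 4, seed=0)   # stage 2: optimize inside bounds
print("interval:", (round(lo, 3), round(hi, 3)), "best loss:", round(result.fun, 6))
```

The real method derives the partition and expansion decisions from grammatical evolution rather than from this random-sampling score; the sketch only conveys the two-stage structure.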

17 pages, 2708 KiB  
Article
Chance-Constrained Optimization Formulation for Ship Conceptual Design: A Comparison of Metaheuristic Algorithms
by Jakub Kudela
Computers 2023, 12(11), 225; https://doi.org/10.3390/computers12110225 - 03 Nov 2023
Cited by 2 | Viewed by 1170
Abstract
This paper presents a new chance-constrained optimization (CCO) formulation for the bulk carrier conceptual design. The CCO problem is modeled through the scenario design approach. We conducted extensive numerical experiments comparing the convergence of both canonical and state-of-the-art metaheuristic algorithms on the original and CCO formulations and showed that the CCO formulation is substantially more difficult to solve. The two best-performing methods were both found to be differential evolution-based algorithms. We then provide an analysis of the resulting solutions in terms of the dependence of the distribution functions of the unit transportation costs and annual cargo capacity of the ship design on the probability of violating the chance constraints. Full article
(This article belongs to the Special Issue Feature Papers in Computers 2023)
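To illustrate the scenario approach to a chance constraint in general terms, the sketch below draws a set of scenarios of an uncertain coefficient and penalizes any scenario violation inside a differential-evolution run. The tiny two-variable problem is invented and is not the bulk-carrier design model from the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(42)
scenarios = rng.normal(loc=1.0, scale=0.15, size=200)     # sampled uncertain demand multiplier

def objective(x):
    cost = x[0] ** 2 + 2.0 * x[1] ** 2                    # illustrative design cost
    # Chance constraint "capacity >= demand" approximated by enforcing it in every scenario.
    violations = np.maximum(scenarios * 10.0 - (3.0 * x[0] + 4.0 * x[1]), 0.0)
    return cost + 1e3 * violations.sum()                  # penalize any scenario violation

result = differential_evolution(objective, bounds=[(0.0, 10.0)] * 2, seed=1)
print("design:", np.round(result.x, 3), "cost + penalty:", round(result.fun, 2))
```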

26 pages, 3805 KiB  
Article
A Methodological Framework for Designing Personalised Training Programs to Support Personnel Upskilling in Industry 5.0
by Francisco Fraile, Foivos Psarommatis, Faustino Alarcón and Jordi Joan
Computers 2023, 12(11), 224; https://doi.org/10.3390/computers12110224 - 02 Nov 2023
Viewed by 2149
Abstract
Industry 5.0 emphasises social sustainability and highlights the critical need for personnel upskilling and reskilling to achieve the seamless integration of human expertise and advanced technology. This paper presents a methodological framework for designing personalised training programs that support personnel upskilling, with the goal of fostering flexibility and resilience amid rapid changes in the industrial landscape. The proposed framework encompasses seven stages: (1) Integration with Existing Systems, (2) Data Collection, (3) Data Preparation, (4) Skills-Models Extraction, (5) Assessment of Skills and Qualifications, (6) Recommendations for Training Program, (7) Evaluation and Continuous Improvement. By leveraging Large Language Models (LLMs) and human-centric principles, our methodology enables the creation of tailored training programs to help organisations promote a culture of proactive learning. This work thus contributes to the sustainable development of the human workforce, facilitating access to high-quality training and fostering personnel well-being and satisfaction. Through a food-processing use case, this paper demonstrates how this methodology can help organisations identify skill gaps and upskilling opportunities and use these insights to drive personnel upskilling in Industry 5.0. Full article

20 pages, 1755 KiB  
Article
Optimization and Scalability of Educational Platforms: Integration of Artificial Intelligence and Cloud Computing
by Jaime Govea, Ernesto Ocampo Edye, Solange Revelo-Tapia and William Villegas-Ch
Computers 2023, 12(11), 223; https://doi.org/10.3390/computers12110223 - 01 Nov 2023
Viewed by 2111
Abstract
The intersection between technology and education has taken on unprecedented relevance, driven by the promise of transforming teaching and learning through advanced digital tools. This study proposes a comprehensive exploration of how cloud computing and artificial intelligence converge to impact education, focusing on accessibility, efficiency, and quality of learning. A mixed-research design identified a 25% improvement in the personalization of educational content thanks to AI and a 60% increase in simultaneous user capacity through cloud computing. Additionally, a significant reduction in administrative errors and improvements in scalability were observed without sacrificing quality. The results demonstrate that these technologies not only improve efficiency and accessibility in education but also enrich the learning experience. By comparing these findings with previous research, this study highlights the synergistic value of these technologies and positions itself as a critical resource to guide future developments and improvements in the education sector in a digitally advanced world. Full article
(This article belongs to the Special Issue Recent Advances in Computer-Assisted Learning)

17 pages, 797 KiB  
Article
Distributed Representation for Assembly Code
by Kazuki Yoshida, Kaiyu Suzuki and Tomofumi Matsuzawa
Computers 2023, 12(11), 222; https://doi.org/10.3390/computers12110222 - 01 Nov 2023
Viewed by 1332
Abstract
In recent years, the number of similar software products with many common parts has been increasing due to the reuse and plagiarism of source code in the software development process. Pattern matching, which is an existing method for detecting similarity, cannot detect the similarities between these software products and other programs. It is necessary, for example, to detect similarities based on commonalities in both functionality and control structures. At the same time, detailed software analysis requires manual reverse engineering. Therefore, technologies that automatically identify similarities among the large amounts of code present in software products in advance can reduce these loads. In this paper, we propose a representation learning model to extract feature expressions from assembly code obtained by statically analyzing such code to determine the similarity between software products. We use assembly code to eliminate the dependence on the existence of source code or differences in development language. The proposed approach makes use of Asm2Vec, an existing method that is capable of generating a vector representation that captures the semantics of assembly code. The proposed method also incorporates information on the program control structure. The control structure can be represented by graph data. Thus, we use graph embedding, a graph vector representation method, to generate a representation vector that reflects both the semantics and the control structure of the assembly code. In our experiments, we generated expression vectors from multiple programs and used clustering to verify the accuracy of the approach in classifying similar programs into the same cluster. The proposed method outperforms existing methods that only consider semantics in both accuracy and execution time. Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)
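The combination step the abstract describes can be sketched as concatenating a per-function semantic vector with a vector derived from its control-flow graph and clustering the result; in the toy below, hand-picked vectors stand in for Asm2Vec embeddings and crude graph statistics stand in for a learned graph embedding. Everything is invented.

```python
import numpy as np
import networkx as nx
from sklearn.cluster import KMeans

def cfg_features(cfg: nx.DiGraph) -> np.ndarray:
    # Crude structural summary of a control-flow graph (stand-in for a graph embedding).
    return np.array([cfg.number_of_nodes(), cfg.number_of_edges(), nx.density(cfg)], dtype=float)

functions = {
    "copy_a": (np.array([0.9, 0.1, 0.0]), nx.DiGraph([(0, 1), (1, 2)])),
    "copy_b": (np.array([0.8, 0.2, 0.1]), nx.DiGraph([(0, 1), (1, 2)])),
    "crypto": (np.array([0.0, 0.1, 0.9]), nx.DiGraph([(0, 1), (1, 2), (2, 1), (2, 3)])),
}

# Joint representation: semantic vector concatenated with control-flow features.
X = np.stack([np.concatenate([sem, cfg_features(g)]) for sem, g in functions.values()])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(dict(zip(functions, labels)))        # similar functions end up in the same cluster
```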

31 pages, 4302 KiB  
Article
Investigation of the Gender-Specific Discourse about Online Learning during COVID-19 on Twitter Using Sentiment Analysis, Subjectivity Analysis, and Toxicity Analysis
by Nirmalya Thakur, Shuqi Cui, Karam Khanna, Victoria Knieling, Yuvraj Nihal Duggal and Mingchen Shao
Computers 2023, 12(11), 221; https://doi.org/10.3390/computers12110221 - 31 Oct 2023
Viewed by 1436
Abstract
This paper presents several novel findings from a comprehensive analysis of about 50,000 Tweets about online learning during COVID-19, posted on Twitter between 9 November 2021 and 13 July 2022. First, the results of sentiment analysis from VADER, Afinn, and TextBlob show that a higher percentage of these Tweets were positive. The results of gender-specific sentiment analysis indicate that, for positive, negative, and neutral Tweets alike, males posted a higher percentage of the Tweets than females. Second, the results from subjectivity analysis show that the percentage of least opinionated, neutral opinionated, and highly opinionated Tweets were 56.568%, 30.898%, and 12.534%, respectively. The gender-specific results for subjectivity analysis indicate that females posted a higher percentage of highly opinionated Tweets as compared to males. However, males posted a higher percentage of least opinionated and neutral opinionated Tweets as compared to females. Third, toxicity detection was performed on the Tweets to detect different categories of toxic content: toxicity, obscene, identity attack, insult, threat, and sexually explicit. The gender-specific analysis of the percentage of Tweets posted by each gender for each of these categories of toxic content revealed several novel insights related to the degree, type, variations, and trends of toxic content posted by males and females related to online learning. Fourth, the average activity of males and females per month in this context was calculated. The findings indicate that the average activity of females was higher than that of males in all months other than March 2022. Finally, country-specific tweeting patterns of males and females were also analyzed, which presented multiple novel insights; for instance, in India, a higher percentage of the Tweets about online learning during COVID-19 were posted by males as compared to females. Full article
(This article belongs to the Special Issue Computational Modeling of Social Processes and Social Networks)
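A small sketch of the per-tweet scoring step, using the vaderSentiment package and TextBlob on two invented example tweets, is shown below; the paper's gender- and country-specific aggregation is not reproduced.

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from textblob import TextBlob

tweets = [
    "Online classes saved my semester, the recorded lectures are great!",
    "Another day of online learning, another day of my wifi dying mid-exam.",
]

vader = SentimentIntensityAnalyzer()
for text in tweets:
    compound = vader.polarity_scores(text)["compound"]        # -1 (negative) .. +1 (positive)
    subjectivity = TextBlob(text).sentiment.subjectivity      # 0 (objective) .. 1 (opinionated)
    label = "positive" if compound >= 0.05 else "negative" if compound <= -0.05 else "neutral"
    print(f"{label:8s} compound={compound:+.3f} subjectivity={subjectivity:.2f} | {text}")
```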

21 pages, 5377 KiB  
Article
Detecting Breast Tumors in Tomosynthesis Images Utilizing Deep Learning-Based Dynamic Ensemble Approach
by Loay Hassan, Adel Saleh, Vivek Kumar Singh, Domenec Puig and Mohamed Abdel-Nasser
Computers 2023, 12(11), 220; https://doi.org/10.3390/computers12110220 - 30 Oct 2023
Cited by 1 | Viewed by 1685
Abstract
Digital breast tomosynthesis (DBT) stands out as a highly robust screening technique capable of enhancing the rate at which breast cancer is detected. It also addresses certain limitations that are inherent to mammography. Nonetheless, the process of manually examining numerous DBT slices per case is notably time-intensive. To address this, computer-aided detection (CAD) systems based on deep learning have emerged, aiming to automatically identify breast tumors within DBT images. However, the current CAD systems are hindered by a variety of challenges. These challenges encompass the diversity observed in breast density, as well as the varied shapes, sizes, and locations of breast lesions. To counteract these limitations, we propose a novel method for detecting breast tumors within DBT images. This method relies on a potent dynamic ensemble technique, along with robust individual breast tumor detectors (IBTDs). The proposed dynamic ensemble technique utilizes a deep neural network to select the optimal IBTD for detecting breast tumors, based on the characteristics of the input DBT image. The developed individual breast tumor detectors hinge on resilient deep-learning architectures and inventive data augmentation methods. This study introduces two data augmentation strategies, namely channel replication and channel concatenation. These data augmentation methods are employed to surmount the scarcity of available data and to replicate diverse scenarios encompassing variations in breast density, as well as the shapes, sizes, and locations of breast lesions. This enhances the detection capabilities of each IBTD. The effectiveness of the proposed method is evaluated against two state-of-the-art ensemble techniques, namely non-maximum suppression (NMS) and weighted boxes fusion (WBF), finding that the proposed ensemble method achieves the best results with an F1-score of 84.96% when tested on a publicly accessible DBT dataset. When evaluated across different modalities such as breast mammography, the proposed method consistently attains superior tumor detection outcomes. Full article
(This article belongs to the Special Issue Artificial Intelligence in Control)
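For reference, non-maximum suppression, one of the two baseline ensemble techniques the proposed method is compared against, can be written in a few lines; the boxes and scores below are invented, and the dynamic-ensemble selector itself is not reproduced.

```python
def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    # Keep the highest-scoring box, drop boxes that overlap it too much, repeat.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

boxes = [(10, 10, 60, 60), (12, 12, 62, 62), (200, 200, 260, 250)]
scores = [0.92, 0.85, 0.70]
print("kept detections:", nms(boxes, scores))     # the overlapping lower-score box is suppressed
```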

25 pages, 2224 KiB  
Article
A CNN-GRU Approach to the Accurate Prediction of Batteries’ Remaining Useful Life from Charging Profiles
by Sadiqa Jafari and Yung-Cheol Byun
Computers 2023, 12(11), 219; https://doi.org/10.3390/computers12110219 - 27 Oct 2023
Cited by 2 | Viewed by 1967
Abstract
Predicting the remaining useful life (RUL) is a pivotal step in ensuring the reliability of lithium-ion batteries (LIBs). In order to enhance the precision and stability of battery RUL prediction, this study introduces an innovative hybrid deep learning model that seamlessly integrates convolutional neural network (CNN) and gated recurrent unit (GRU) architectures. Our primary goal is to significantly improve the accuracy of RUL predictions for LIBs. Our model excels in its predictive capabilities by skillfully extracting intricate features from a diverse array of data sources, including voltage (V), current (I), temperature (T), and capacity. Within this novel architectural design, parallel CNN layers are meticulously crafted to process each input feature individually. This approach enables the extraction of highly pertinent information from multi-channel charging profiles. We subjected our model to rigorous evaluations across three distinct scenarios to validate its effectiveness. When compared to LSTM, GRU, and CNN-LSTM models, our CNN-GRU model showcases a remarkable reduction in root mean square error, mean square error, mean absolute error, and mean absolute percentage error. These results affirm the superior predictive capabilities of our CNN-GRU model, which effectively harnesses the strengths of both CNNs and GRU networks to achieve superior prediction accuracy. This study draws upon NASA data to underscore the outstanding predictive performance of the CNN-GRU model in estimating the RUL of LIBs. Full article
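A hedged Keras sketch of a parallel-CNN-plus-GRU layout in the spirit of the abstract is given below: one Conv1D branch per input channel (voltage, current, temperature, capacity), merged and fed to a GRU that regresses remaining useful life. All shapes and sizes are placeholders, not the authors' configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

timesteps = 50                                    # illustrative length of a charging profile
inputs, branches = [], []
for channel in ["voltage", "current", "temperature", "capacity"]:
    inp = layers.Input(shape=(timesteps, 1), name=channel)
    conv = layers.Conv1D(16, kernel_size=5, padding="same", activation="relu")(inp)
    inputs.append(inp)
    branches.append(conv)

merged = layers.Concatenate()(branches)           # (batch, timesteps, 4 * 16)
gru_out = layers.GRU(32)(merged)
rul = layers.Dense(1, name="remaining_useful_life")(gru_out)

model = Model(inputs, rul)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.summary()
```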

34 pages, 11710 KiB  
Article
BigDaM: Efficient Big Data Management and Interoperability Middleware for Seaports as Critical Infrastructures
by Anastasios Nikolakopoulos, Matilde Julian Segui, Andreu Belsa Pellicer, Michalis Kefalogiannis, Christos-Antonios Gizelis, Achilleas Marinakis, Konstantinos Nestorakis and Theodora Varvarigou
Computers 2023, 12(11), 218; https://doi.org/10.3390/computers12110218 - 27 Oct 2023
Viewed by 1947
Abstract
Over the last few years, the European Union (EU) has placed significant emphasis on the interoperability of critical infrastructures (CIs). Ports are among the main CI transportation infrastructures. The control systems managing such infrastructures are constantly evolving and handle diverse sets of people, data, and processes. Additionally, interdependencies among different infrastructures can lead to discrepancies in data models that propagate and intensify across interconnected systems. This article introduces “BigDaM”, a Big Data Management framework for critical infrastructures. It is a cutting-edge data model that adheres to the latest technological standards and aims to consolidate APIs and services within highly complex CI infrastructures. Our approach takes a bottom-up perspective, treating each service interconnection as an autonomous entity that must align with the proposed common vocabulary and data model. By injecting strict guidelines into the service/component development lifecycle, we explicitly promote interoperability among the services within critical infrastructure ecosystems. This approach facilitates the exchange and reuse of data from a shared repository among developers, small and medium-sized enterprises (SMEs), and large vendors. Business challenges have also been taken into account, in order to link the generated data assets of CIs with the business world. The complete framework has been tested in the main EU ports, part of the transportation sector of CIs. Performance evaluation of the aforementioned testing is also presented, highlighting the capabilities of the proposed approach. Full article

18 pages, 2672 KiB  
Article
Enhancing Automated Scoring of Math Self-Explanation Quality Using LLM-Generated Datasets: A Semi-Supervised Approach
by Ryosuke Nakamoto, Brendan Flanagan, Taisei Yamauchi, Yiling Dai, Kyosuke Takami and Hiroaki Ogata
Computers 2023, 12(11), 217; https://doi.org/10.3390/computers12110217 - 24 Oct 2023
Cited by 1 | Viewed by 2180
Abstract
In the realm of mathematics education, self-explanation stands as a crucial learning mechanism, allowing learners to articulate their comprehension of intricate mathematical concepts and strategies. As digital learning platforms grow in prominence, there are mounting opportunities to collect and utilize mathematical self-explanations. However, these opportunities are met with challenges in automated evaluation. Automatic scoring of mathematical self-explanations is crucial for preprocessing tasks, including the categorization of learner responses, identification of common misconceptions, and the creation of tailored feedback and model solutions. Nevertheless, this task is hindered by the dearth of ample sample sets. Our research introduces a semi-supervised technique using the large language model (LLM), specifically its Japanese variant, to enrich datasets for the automated scoring of mathematical self-explanations. We rigorously evaluated the quality of self-explanations across five datasets, ranging from human-evaluated originals to ones devoid of original content. Our results show that combining LLM-based explanations with mathematical material significantly improves the model’s accuracy. Interestingly, there is an optimal limit to how many synthetic self-explanation data can benefit the system. Exceeding this limit does not further improve outcomes. This study thus highlights the need for careful consideration when integrating synthetic data into solutions, especially within the mathematics discipline. Full article
(This article belongs to the Special Issue Recent Advances in Computer-Assisted Learning)
