Computers, Volume 12, Issue 9 (September 2023) – 21 articles

Cover Story: We developed a novel Hierarchical VPLS (H-VPLS) architecture via Q-in-Q tunneling on a commodity router. Our work is based on utilizing and enhancing two well-known open-source packages: Vector Packet Processing (VPP) as the router's fast data plane and FRRouting (FRR), a modular control-plane protocol suite, to implement VPLS. Both VPP and FRR have active and dynamic communities, and they are the only open-source frameworks that support VPLS. FRR implements the control messages and relevant signaling in the control plane, while VPP provides the capability of forwarding and labeling VPLS packets in the data plane.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; PDF is the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
20 pages, 2593 KiB  
Article
Evaluating Video Games as Tools for Education on Fake News and Misinformation
by Ruth S. Contreras-Espinosa and Jose Luis Eguia-Gomez
Computers 2023, 12(9), 188; https://doi.org/10.3390/computers12090188 - 21 Sep 2023
Viewed by 2051
Abstract
Despite access to reliable information being essential for equal opportunities in our society, current school curricula include only some notions of media literacy, in a limited context. It is therefore necessary to create scenarios for reflection on, and well-founded analysis of, misinformation. Video games may be an effective approach to fostering these skills and can seamlessly integrate learning content into their design, making it possible to achieve multiple learning outcomes and build competencies that transfer to real-life situations. We analyzed 24 video games about media literacy, studying their content, design, and the characteristics that may affect their implementation in learning settings. Even though not all of the learning outcomes considered were equally addressed, the results show that media literacy video games currently on the market could be used as effective tools to achieve critical learning goals and may allow users to understand, practice, and implement skills to fight misinformation, regardless of their complexity in terms of game mechanics. However, we found that certain characteristics of video games may affect their implementation in learning environments, such as their availability, estimated playing time, approach, or whether they are set in real or fictional worlds; these variables should be further considered by both developers and educators.
(This article belongs to the Special Issue Game-Based Learning, Gamification in Education and Serious Games 2023)

17 pages, 2268 KiB  
Article
Addressing Uncertainty in Tool Wear Prediction with Dropout-Based Neural Network
by Arup Dey, Nita Yodo, Om P. Yadav, Ragavanantham Shanmugam and Monsuru Ramoni
Computers 2023, 12(9), 187; https://doi.org/10.3390/computers12090187 - 19 Sep 2023
Viewed by 1150
Abstract
Data-driven algorithms have been widely applied to tool wear prediction because of their high prediction performance, the availability of data sets, and recent advances in computing capabilities. Although most algorithms are expected to generate outcomes with high precision and accuracy, this is not always true in practice. Uncertainty arises at distinct phases of applying data-driven algorithms due to noise and randomness in the data, the presence of redundant and irrelevant features, and model assumptions. Uncertainty due to noise and missing data is known as data uncertainty, while model assumptions and imperfections give rise to model uncertainty. In this paper, both types of uncertainty are considered in tool wear prediction. Empirical mode decomposition is applied to reduce uncertainty in the raw data, and the Monte Carlo dropout technique is used when training a neural network to incorporate model uncertainty. The unique feature of the proposed method is that it estimates tool wear as an interval, with the interval range representing the degree of uncertainty. Different performance metrics are used to evaluate the proposed method, and it is shown that the approach can predict tool wear with higher accuracy.
(This article belongs to the Special Issue Uncertainty-Aware Artificial Intelligence)
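The interval-valued prediction described in the abstract can be illustrated with a small sketch of Monte Carlo dropout: dropout stays active at inference time, and many stochastic forward passes are aggregated into a point estimate plus an uncertainty interval. The single-layer toy model, weights, and pass count below are hypothetical, not taken from the paper.

```python
import random

def mc_dropout_predict(weights, x, p=0.5, passes=200, seed=0):
    """Monte Carlo dropout: keep dropout active at inference time and
    aggregate many stochastic forward passes of a (toy) linear model."""
    rng = random.Random(seed)
    preds = []
    for _ in range(passes):
        acc = 0.0
        for w, xi in zip(weights, x):
            if rng.random() >= p:             # this weight survives the pass
                acc += (w / (1.0 - p)) * xi   # rescale surviving weights
        preds.append(acc)
    preds.sort()
    point = sum(preds) / passes
    # Empirical 95% interval: its width reflects predictive uncertainty.
    return point, (preds[int(0.025 * passes)], preds[int(0.975 * passes)])

point, (low, high) = mc_dropout_predict([0.4, -0.2, 0.7], [1.0, 2.0, 3.0])
```

The wider the returned interval, the less certain the model is about that prediction, which is exactly the quantity the paper reports alongside the wear estimate.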

14 pages, 959 KiB  
Article
Video Summarization Based on Feature Fusion and Data Augmentation
by Theodoros Psallidas and Evaggelos Spyrou
Computers 2023, 12(9), 186; https://doi.org/10.3390/computers12090186 - 15 Sep 2023
Cited by 2 | Viewed by 1282
Abstract
During the last few years, several technological advances have led to an increase in the creation and consumption of audiovisual multimedia content. Users are overexposed to videos via social media, video sharing websites, and mobile phone applications. For efficient browsing, searching, and navigation across multimedia collections and repositories, e.g., for finding videos relevant to a particular topic or interest, this ever-increasing content should be described by informative yet concise representations. A common solution is to construct a brief summary of a video, which can be presented to users instead of the full video so that they can decide whether to watch it. Such summaries are ideally more expressive than alternatives such as brief textual descriptions or keywords. In this work, video summarization is approached as a supervised classification task that relies on the fusion of audio and visual features. Specifically, the goal is to generate dynamic video summaries, i.e., compositions of parts of the original video that include its most essential segments while preserving the original temporal sequence. The work relies on datasets annotated on a per-frame basis, wherein parts of videos are marked as "informative" or "noninformative", with the latter excluded from the produced summary. The novelties of the proposed approach are: (a) prior to classification, a transfer learning strategy is employed to extract deep features from pretrained models, which are used as input to the classifiers, making them more robust to the subjectivity of the annotations; and (b) the training dataset is augmented with other publicly available datasets. The proposed approach is evaluated using three datasets of user-generated videos, and it is demonstrated that deep features and data augmentation improve the accuracy of video summaries with respect to human annotations. Moreover, the approach is domain independent, can be used on any video, and can be extended to richer feature representations or other data modalities.
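The pipeline described above (fuse per-frame audio and visual features, then let a classifier keep the "informative" frames in temporal order) can be sketched as follows. The feature vectors, scorer, and threshold are illustrative stand-ins for the paper's deep features and trained classifier.

```python
def fuse(visual_feats, audio_feats):
    """Early fusion: concatenate per-frame visual and audio vectors."""
    return [v + a for v, a in zip(visual_feats, audio_feats)]

def summarize(frames, fused, score, threshold=0.4):
    """Keep frames whose fused-feature score marks them 'informative',
    preserving the original temporal order (a dynamic summary)."""
    return [f for f, feat in zip(frames, fused) if score(feat) > threshold]

# Hypothetical per-frame descriptors; in the paper these would come
# from pretrained deep models (visual) and audio feature extractors.
visual = [[0.9, 0.1], [0.2, 0.1], [0.8, 0.7]]
audio = [[0.5], [0.0], [0.6]]
fused = fuse(visual, audio)
# Toy scorer standing in for the trained classifier.
keep = summarize(["f0", "f1", "f2"], fused, score=lambda v: sum(v) / len(v))
```

Because the filter iterates over frames in order, the selected segments retain the original temporal sequence, which is what distinguishes a dynamic summary from a keyframe set.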

52 pages, 1408 KiB  
Article
Specification Mining over Temporal Data
by Giacomo Bergami, Samuel Appleby and Graham Morgan
Computers 2023, 12(9), 185; https://doi.org/10.3390/computers12090185 - 14 Sep 2023
Cited by 1 | Viewed by 993
Abstract
Current specification mining algorithms for temporal data rely on exhaustive search approaches, which become detrimental in real data settings where a plethora of distinct temporal behaviours are recorded over prolonged observations. This paper proposes a novel algorithm, Bolt2, based on a refined heuristic search of our previous algorithm, Bolt. Our experiments show that the proposed approach not only surpasses exhaustive search methods in terms of running time but also guarantees a minimal description that captures the overall temporal behaviour. This is achieved through a hypothesis lattice search that exploits support metrics. Our novel specification mining algorithm also outperforms the results achieved in our previous contribution.
(This article belongs to the Special Issue Advances in Database Engineered Applications 2023)
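The support metric that drives such a lattice search can be illustrated with one declarative template. The Response(a, b) template and the toy log below are illustrative only; they are not the exact templates or data used by Bolt2.

```python
def holds_response(trace, a, b):
    """Response(a, b): every occurrence of a is eventually followed by b."""
    pending = False
    for event in trace:
        if event == a:
            pending = True
        elif event == b:
            pending = False
    return not pending

def support(traces, a, b):
    """Share of traces satisfying the template: the support metric a
    hypothesis-lattice search can use to prune candidate clauses."""
    return sum(holds_response(t, a, b) for t in traces) / len(traces)

log = [["a", "c", "b"], ["a", "b", "a", "b"], ["a", "c"]]
s = support(log, "a", "b")   # 2 of the 3 traces satisfy Response(a, b)
```

A miner can discard every refinement of a candidate whose support already falls below a threshold, which is what makes the heuristic search cheaper than exhaustive enumeration.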

17 pages, 3062 KiB  
Article
Process-Oriented Requirements Definition and Analysis of Software Components in Critical Systems
by Benedetto Intrigila, Giuseppe Della Penna, Andrea D’Ambrogio, Dario Campagna and Malina Grigore
Computers 2023, 12(9), 184; https://doi.org/10.3390/computers12090184 - 14 Sep 2023
Viewed by 1136
Abstract
Requirements management is a key aspect in the development of software components, since complex systems are often subject to frequent updates due to continuously changing requirements. This is especially true in critical systems, i.e., systems whose failure or malfunctioning may lead to severe consequences. This paper proposes a three-step approach that incrementally refines a critical system specification, from a lightweight high-level model targeted to stakeholders, down to a formal standard model that links requirements, processes and data. The resulting model provides the requirements specification used to feed the subsequent development, verification and maintenance activities, and can also be seen as a first step towards the development of a digital twin of the physical system.
(This article belongs to the Special Issue Recent Advances in Digital Twins and Cognitive Twins)

14 pages, 748 KiB  
Article
Enhancing Counterfeit Detection with Multi-Features on Secure 2D Grayscale Codes
by Bimo Sunarfri Hantono, Syukron Abu Ishaq Alfarozi, Azkario Rizky Pratama, Ahmad Ataka Awwalur Rizqi, I Wayan Mustika, Mardhani Riasetiawan and Anna Maria Sri Asih
Computers 2023, 12(9), 183; https://doi.org/10.3390/computers12090183 - 14 Sep 2023
Cited by 1 | Viewed by 1210
Abstract
Counterfeit products have become a pervasive problem in the global marketplace, necessitating effective strategies to protect both consumers and brands. This study examines the role of cybersecurity in addressing counterfeiting issues, specifically focusing on a multi-level grayscale watermark-based authentication system. The system comprises a generator responsible for creating a secure 2D code, and an authenticator designed to extract watermark information and verify product authenticity. To authenticate the secure 2D code, we propose various features, including analysis of the spatial domain, the frequency domain, and the grayscale watermark distribution. Furthermore, we emphasize the importance of selecting appropriate interpolation methods to enhance counterfeit detection. Our proposed approach demonstrates remarkable performance, achieving precision, recall, and specificity surpassing 84.8%, 83.33%, and 84.5%, respectively, across different datasets.
(This article belongs to the Special Issue Using New Technologies on Cyber Security Solutions)
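One of the proposed feature families, frequency-domain analysis, can be sketched in a few lines: copying or rescanning a printed 2D code tends to attenuate high spatial frequencies, so the magnitude of high-frequency DFT bins is a plausible authenticity signal. The pixel rows and the choice of the Nyquist bin below are illustrative assumptions, not the paper's actual features.

```python
import cmath

def dft_magnitudes(signal):
    """Naive 1-D DFT magnitude spectrum (O(n^2), fine for a sketch)."""
    n = len(signal)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                    for i, x in enumerate(signal)))
            for k in range(n)]

# A crisp watermark row alternates strongly pixel to pixel; a copied
# (rescanned) version is low-pass filtered, flattening the contrast.
crisp = [0, 255, 0, 255, 0, 255, 0, 255]
copied = [96, 160, 96, 160, 96, 160, 96, 160]

# Magnitude at the Nyquist bin (k = n/2) as a high-frequency feature.
hf_crisp = dft_magnitudes(crisp)[4]
hf_copied = dft_magnitudes(copied)[4]
```

A classifier can threshold (or learn from) such per-row features; the gap between the two magnitudes here is what a frequency-domain authenticity feature exploits.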

15 pages, 3402 KiB  
Article
Multispectral Image Generation from RGB Based on WSL Color Representation: Wavelength, Saturation, and Lightness
by Vaclav Skala
Computers 2023, 12(9), 182; https://doi.org/10.3390/computers12090182 - 13 Sep 2023
Viewed by 1079
Abstract
Image processing techniques are based nearly exclusively on the RGB (red–green–blue) representation, which is significantly influenced by technological issues. The RGB triplet represents a mixture of the wavelength, saturation, and lightness values of light, which leads to unexpected chromaticity artifacts in processing. Processing based on the wavelength, saturation, and lightness should therefore be more resistant to the introduction of color artifacts. However, converting RGB values to the corresponding wavelengths is not straightforward. In this contribution, a novel, simple, and accurate method for extracting the wavelength, saturation, and lightness of a color represented by an RGB triplet is described. The conversion relies on the known RGB values of the rainbow spectrum and accommodates variations in color saturation.
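A heavily simplified sketch of the spectrum-lookup idea: match an input RGB triplet against a table of known rainbow-spectrum colors and return the closest entry's wavelength. The four-entry table and nearest-neighbour matching are illustrative assumptions; the paper's conversion is finer-grained and also models saturation and lightness.

```python
# Coarse stand-in for the rainbow-spectrum table the method relies on:
# approximate RGB triplets of saturated spectral colours (nanometres).
SPECTRUM = [
    (450, (0, 0, 255)),    # blue
    (510, (0, 255, 0)),    # green
    (580, (255, 255, 0)),  # yellow
    (620, (255, 0, 0)),    # red
]

def dominant_wavelength(rgb):
    """Return the wavelength whose reference RGB is nearest the input
    (illustrative only; the paper also handles saturation/lightness)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(SPECTRUM, key=lambda entry: dist2(entry[1], rgb))[0]

wavelength = dominant_wavelength((250, 20, 10))   # a near-red input
```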

19 pages, 592 KiB  
Article
Building an Expert System through Machine Learning for Predicting the Quality of a Website Based on Its Completion
by Vishnu Priya Biyyapu, Sastry Kodanda Rama Jammalamadaka, Sasi Bhanu Jammalamadaka, Bhupati Chokara, Bala Krishna Kamesh Duvvuri and Raja Rao Budaraju
Computers 2023, 12(9), 181; https://doi.org/10.3390/computers12090181 - 11 Sep 2023
Viewed by 1041
Abstract
The Internet is now the main channel for disseminating information. Users have differing expectations of the calibre of websites with regard to the posted and presented content. A website's quality is influenced by up to 120 factors, each represented by two to fifteen attributes. A major challenge is quantifying these features and evaluating the quality of a website based on the feature counts. One of the aspects that determines a website's quality is its completeness, which focuses on the existence of all the required objects and their connections with one another. Building an expert model based on feature counts to evaluate website quality is not easy, and this paper focuses on that challenge. Both a methodology for calculating a website's quality and a parser-based approach for measuring feature counts are offered. We present a multi-layer perceptron model that serves as an expert model for forecasting website quality from the "completeness" perspective. The accuracy of the predictions is 98%, while the closest competing model achieves 87%.
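As a toy illustration of the "completeness" perspective, a parser-derived feature count can be turned into a simple score: the share of expected objects actually found. The object names below are hypothetical, and the paper's approach feeds such counts into a multi-layer perceptron rather than taking a plain ratio.

```python
def completeness(expected, found):
    """Toy 'completeness' signal: the share of expected website objects
    a parser actually found. The paper's model feeds such feature
    counts into a multi-layer perceptron instead of a simple ratio."""
    present = sum(1 for obj in expected if obj in found)
    return present / len(expected)

# Hypothetical object inventory for a site under evaluation.
expected = ["nav", "sitemap", "contact", "search", "footer"]
score = completeness(expected, found={"nav", "contact", "footer"})
```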

25 pages, 2070 KiB  
Article
Developing a Novel Hierarchical VPLS Architecture Using Q-in-Q Tunneling in Router and Switch Design
by Morteza Biabani, Nasser Yazdani and Hossein Fotouhi
Computers 2023, 12(9), 180; https://doi.org/10.3390/computers12090180 - 07 Sep 2023
Cited by 1 | Viewed by 1262
Abstract
Virtual Private LAN Service (VPLS) is an Ethernet-based Virtual Private Network (VPN) service that provides multipoint-to-multipoint Layer 2 VPN service, where sites are geographically dispersed across a Wide Area Network (WAN). Although VPLS provides a flexible solution for connecting geographically dispersed sites, its adaptability and scalability are limited. Furthermore, the construction of tunnels connecting customer locations separated by great distances adds substantial latency to user traffic. To address these issues, a novel Hierarchical VPLS (H-VPLS) architecture has been developed using 802.1Q tunneling (also known as Q-in-Q) on high-speed commodity routers to satisfy the additional requirements of new VPLS applications. Vector Packet Processing (VPP) serves as the router's data plane, and FRRouting (FRR), an open-source network routing software suite, acts as the router's control plane. The router is designed to seamlessly forward VPLS packets, following IETF RFCs 4762, 4446, 4447, 4448, and 4385, integrated with VPP. In addition, the Label Distribution Protocol (LDP) is used for Multi-Protocol Label Switching (MPLS) Pseudo-Wire (PW) signaling in FRR. The proposed mechanism has been implemented on a software-based router in a Linux environment and tested for its functionality, signaling, and control-plane processes. The router has also been implemented on commodity hardware to test VPLS functionality in the real world. Finally, analysis of the results verifies the efficiency of the proposed mechanism in terms of throughput, latency, and packet loss ratio.
(This article belongs to the Special Issue Advances in High-Performance Switching and Routing)
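The Q-in-Q encapsulation at the heart of the H-VPLS design stacks a service-provider (outer) VLAN tag on top of the customer (inner) tag. The sketch below packs just the two 4-byte tags, omitting MAC addresses and the rest of the Ethernet frame; the TPID values follow IEEE 802.1ad/802.1Q, while the VLAN IDs are arbitrary examples.

```python
import struct

TPID_S = 0x88A8   # outer, service-provider tag (IEEE 802.1ad)
TPID_C = 0x8100   # inner, customer tag (IEEE 802.1Q)

def qinq_tags(s_vid, c_vid, pcp=0):
    """Build the stacked outer+inner VLAN tags used by Q-in-Q.
    Each tag is 4 bytes: TPID, then TCI = PCP(3) | DEI(1) | VID(12)."""
    def tag(tpid, vid):
        tci = (pcp << 13) | (vid & 0x0FFF)
        return struct.pack("!HH", tpid, tci)   # network byte order
    return tag(TPID_S, s_vid) + tag(TPID_C, c_vid)

tags = qinq_tags(s_vid=100, c_vid=42)
```

Because the provider edge only needs to push and pop the outer tag, customer VLANs travel across the core unchanged, which is what lets H-VPLS scale the number of customer sites without exhausting provider VLAN space.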

22 pages, 6318 KiB  
Article
Optimized Downlink Scheduling over LTE Network Based on Artificial Neural Network
by Falah Y. H. Ahmed, Amal Abulgasim Masli, Bashar Khassawneh, Jabar H. Yousif and Dilovan Asaad Zebari
Computers 2023, 12(9), 179; https://doi.org/10.3390/computers12090179 - 07 Sep 2023
Viewed by 1014
Abstract
Long-Term Evolution (LTE) technology is used efficiently for wireless broadband communication on mobile devices. It provides flexible bandwidth and frequency with high speed and peak data rates. Optimizing resource allocation is vital for improving the performance of the LTE system and meeting the user's quality of service (QoS) needs. The distribution of resources in video streaming affects LTE network performance, reducing network fairness and causing increased delay and lower data throughput. This study proposes a novel approach utilizing artificial neural networks (ANNs) based on normalized radial basis function NN (RBFNN) and generalized regression NN (GRNN) techniques, with 3rd Generation Partnership Project (3GPP) specifications used to derive accurate and reliable output data for the LTE downlink scheduling algorithms. The performance of the proposed methods is compared in terms of packet loss rate, throughput, delay, spectrum efficiency, and fairness. The results of the proposed algorithm significantly improve the efficiency of real-time streaming compared with the LTE-DL algorithms, and these improvements come with lower computational complexity.
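The RBFNN used here can be summarised by its forward pass: Gaussian activations around learned centres, combined linearly. The centres, widths, and weights below are made-up toy values, not parameters from the study.

```python
import math

def rbf_forward(x, centers, widths, weights, bias=0.0):
    """Forward pass of a radial basis function network: Gaussian
    activations around each centre, combined by a linear output layer."""
    acts = [math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c))
                     / (2.0 * s * s))
            for c, s in zip(centers, widths)]
    return bias + sum(w * a for w, a in zip(weights, acts))

# Toy two-unit model; the input could encode per-user channel state.
y = rbf_forward(x=[1.0, 0.0],
                centers=[[1.0, 0.0], [0.0, 1.0]],
                widths=[0.5, 0.5],
                weights=[0.8, 0.2])
```

The locality of the Gaussian units (inputs far from every centre produce near-zero output) is what makes RBF networks fast to train for scheduling-style regression tasks.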

17 pages, 2961 KiB  
Article
Opinion Formation in Online Public Debates Structured in Information Cascades: A System-Theoretic Viewpoint
by Ivan V. Kozitsin
Computers 2023, 12(9), 178; https://doi.org/10.3390/computers12090178 - 07 Sep 2023
Cited by 1 | Viewed by 918
Abstract
Online information cascades (tree-like structures formed by posts, comments, likes, replies, etc.) constitute the spine of the public online information environment, reflecting its various trends, evolving with it and, importantly, affecting its development. As users participate in online discussions, they display their views and thus contribute to the growth of cascades. At the same time, users' opinions are influenced by the cascades' elements. The current paper aims to advance our knowledge of these social processes by developing an agent-based model in which agents participate in a discussion around a post on the Internet. Agents display their opinions by writing comments on the post and liking them (i.e., leaving positive assessments). The result of these processes is dual: on the one hand, agents develop an information cascade; on the other hand, they update their views. Our purpose is to understand how agents' activity, openness to influence, and cognitive constraints (which limit the amount of information individuals are able to process) affect opinion dynamics in a three-party society. More precisely, we are interested in which opinion will dominate in the long run and how this is moderated by the aforementioned factors, the social contagion effect (whereby people's perception of a message may depend not only on the message's opinion but also on how other individuals perceive it, with more positive evaluations increasing the probability of adoption), and the ranking algorithms that steer the order in which agents learn of new messages. Among other things, we demonstrate that replies to disagreeable opinions are extremely effective for promoting one's own position, whereas various forms of like activity have only a tiny effect.
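A minimal sketch of one round of such an agent-based model, with openness to influence and a cognitive capacity limit on how much of the ranked feed each agent reads. The parameters and the deterministic toy feed are illustrative, not the paper's calibrated model.

```python
import random

def step(opinions, feed, openness=0.3, capacity=3, seed=1):
    """One round of a toy cascade model: each agent reads at most
    `capacity` comments from the ranked feed (a cognitive constraint)
    and, with probability `openness`, adopts one opinion it read."""
    rng = random.Random(seed)
    updated = []
    for op in opinions:
        visible = feed[:capacity]    # the ranking decides what is seen
        if visible and rng.random() < openness:
            updated.append(rng.choice(visible))
        else:
            updated.append(op)
    return updated

feed = [1, 1, -1, 0, 0]   # comment opinions, most-promoted first
agents = step([0] * 100, feed, openness=1.0, capacity=2)
share_pro = agents.count(1) / len(agents)
```

Even this toy shows the mechanism of interest: with a small capacity, whatever the ranking algorithm promotes to the top of the feed dominates the opinions agents can adopt.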

23 pages, 2321 KiB  
Article
Effect of Digital Game-Based Learning on Student Engagement and Motivation
by Muhammad Nadeem, Melinda Oroszlanyova and Wael Farag
Computers 2023, 12(9), 177; https://doi.org/10.3390/computers12090177 - 06 Sep 2023
Cited by 1 | Viewed by 14733
Abstract
Currently, academia is grappling with a significant problem: a lack of engagement. Humankind has gone far in exploring entertainment options, while the education system has not really kept up. Millennials love playing games, and this affinity can be used to engage and motivate them in the learning process. This study examines the effect of digital game-based learning on student engagement and motivation levels, as well as gender differences, in online learning settings. The study was conducted in two distinct phases. Game-based and traditional online quizzing tools were used to compare levels of engagement and motivation, and to assess the additional parameter of gender difference. During the first phase, 276 male and female undergraduate students were recruited from Sophomore Seminar classes, and 101 participated in the survey, of which 83 were male and 18 were female. In the second phase, 126 participants were recruited, of which 107 (63 females and 44 males) participated in the anonymous feedback surveys. The results revealed that digital game-based learning has a more positive impact on student engagement and motivation than traditional online activities. The incorporation of a leaderboard as a gaming element was found to positively impact the academic performance of certain students, but it could also demotivate others. Furthermore, female students generally showed a slightly higher level of enjoyment of the games than male students, but they did not favor comparison with other students as much as male students did. The favorable response from students toward digital game-based activities indicates that enhancing instruction with such activities will not only make learning an enjoyable experience for learners but also enhance their engagement.
(This article belongs to the Special Issue Recent Advances in Computer-Assisted Learning)

16 pages, 1060 KiB  
Article
Attitudes towards Digital Educational Technologies Scale for University Students: Development and Validation
by Irina A. Novikova, Polina A. Bychkova, Dmitriy A. Shlyakhta and Alexey L. Novikov
Computers 2023, 12(9), 176; https://doi.org/10.3390/computers12090176 - 05 Sep 2023
Viewed by 1679
Abstract
Numerous studies of the digitalization of higher education show that university students' attitudes toward digital educational technologies (DETs) are one of the important psychological factors that can hinder or facilitate the optimal implementation of digital technologies in education. International researchers have developed many tools for diagnosing university students' attitudes toward various aspects of the digitalization of education; however, until recently, similar scales in Russian had not been developed, which determined the purpose of the present research. The proposed version of the Attitudes towards DETs Scale for University Students (ATDETS-US) includes cognitive, emotional, and behavioral subscales corresponding to the components of attitude in the ACB model. The validation sample included 317 bachelor's and master's students (160 females and 157 males) from different Russian universities. Psychometric testing using Cronbach's alpha and McDonald's omega coefficients, hierarchical factor analysis, and confirmatory factor analysis (CFA) confirms the high internal consistency and reliability of the ATDETS-US and its subscales, and the good fit of the model. The ATDETS-US will be used to obtain reliable data on university students' attitudes towards DETs, which should be taken into account when designing programs for their psychological support in the educational process and developing their digital competence.
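The internal-consistency check reported above can be reproduced in miniature: Cronbach's alpha compares the sum of per-item variances with the variance of total scores. The toy responses below are made up for illustration.

```python
def cronbach_alpha(items):
    """Cronbach's alpha: items is a list of per-item score lists, with
    respondents in the same order in every list."""
    k = len(items)
    n = len(items[0])

    def var(xs):                      # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1.0 - sum(var(it) for it in items) / var(totals))

# Three perfectly consistent toy items: alpha should come out as 1.0.
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
```

Values near 1 indicate that the subscale's items move together across respondents, which is the property the ATDETS-US validation reports.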

22 pages, 15152 KiB  
Article
Novel Deep Feature Fusion Framework for Multi-Scenario Violence Detection
by Sabah Abdulazeez Jebur, Khalid A. Hussein, Haider Kadhim Hoomod and Laith Alzubaidi
Computers 2023, 12(9), 175; https://doi.org/10.3390/computers12090175 - 05 Sep 2023
Cited by 10 | Viewed by 1406
Abstract
Detecting violence in various scenarios is a difficult task that requires a high degree of generalisation. This includes fights in different environments such as schools, streets, and football stadiums. However, most current research on violence detection focuses on a single scenario, limiting its ability to generalise across multiple scenarios. To tackle this issue, this paper offers a new multi-scenario violence detection framework that operates in two environments: fighting in various locations and rugby stadiums. The framework has three main steps. Firstly, it uses transfer learning by employing three models pre-trained on the ImageNet dataset: Xception, Inception, and InceptionResNet. This approach enhances generalisation and prevents overfitting, as these models have already learned valuable features from a large and diverse dataset. Secondly, the framework combines the features extracted from the three models through feature fusion, which improves feature representation and enhances performance. Lastly, the concatenation step combines the features of the first violence scenario with the second scenario to train a machine learning classifier, enabling the classifier to generalise across both scenarios. This concatenation framework is highly flexible, as it can incorporate multiple violence scenarios without requiring training from scratch for additional scenarios. The Fusion model, which incorporates feature fusion from multiple models, obtained an accuracy of 97.66% on the RLVS dataset and 92.89% on the Hockey dataset. The Concatenation model achieved an accuracy of 97.64% on the RLVS dataset and 92.41% on the Hockey dataset with just a single classifier. This is the first framework that allows for the classification of multiple violent scenarios within a single classifier. Furthermore, the framework is not limited to violence detection and can be adapted to different tasks.
(This article belongs to the Special Issue Uncertainty-Aware Artificial Intelligence)
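The fusion and concatenation steps can be sketched with stand-in feature extractors: fusion concatenates the vectors several backbones produce for one clip, and the concatenation step joins features from two scenarios so one classifier covers both. The lambda "backbones" below are placeholders for Xception, Inception, and InceptionResNet.

```python
def fuse_features(extractors, clip):
    """Feature fusion: concatenate the feature vectors that several
    (stand-in) pretrained backbones produce for one clip."""
    fused = []
    for extract in extractors:
        fused.extend(extract(clip))
    return fused

def concat_scenarios(feats_a, feats_b):
    """Concatenation step: join features from two violence scenarios so
    a single classifier can be trained to cover both."""
    return feats_a + feats_b

# Placeholder 'backbones'; each maps a clip to a short feature vector.
backbones = [lambda c: [sum(c)], lambda c: [max(c)], lambda c: [min(c)]]
clip = [2, 9, 4]
fused = fuse_features(backbones, clip)        # one fused representation
both = concat_scenarios(fused, [7, 1])        # plus a second scenario
```

Because adding a scenario only extends the concatenated vector, the classifier can absorb new environments without retraining the backbones from scratch, which is the flexibility the abstract emphasises.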

18 pages, 1663 KiB  
Article
Automated Optimization-Based Deep Learning Models for Image Classification Tasks
by Daudi Mashauri Migayo, Shubi Kaijage, Stephen Swetala and Devotha G. Nyambo
Computers 2023, 12(9), 174; https://doi.org/10.3390/computers12090174 - 01 Sep 2023
Cited by 1 | Viewed by 1706
Abstract
Applying deep learning models requires design and optimization when solving multifaceted artificial intelligence tasks. Optimization relies on human expertise and is achieved only with great exertion. The current literature concentrates on automating design; optimization needs more attention. Similarly, most existing optimization libraries focus on machine learning tasks other than image classification. For this reason, an automated optimization scheme for deep learning models applied to image classification tasks is proposed in this paper. A sequential model-based optimization algorithm was used to implement the proposed method. Four deep learning models, a transformer-based model, and standard datasets for image classification challenges were employed in the experiments. Through empirical evaluations, this paper demonstrates that the proposed scheme improves the performance of deep learning models. Specifically, for a Visual Geometry Group (VGG-16) model, accuracy was raised from 0.937 to 0.983, signifying a 73% relative error-rate drop within an hour of automated optimization. Similarly, training-related parameter values are proposed to improve the performance of deep learning models. The scheme can be extended to automate the optimization of transformer-based models. The insights from this study may assist efforts to provide full access to the building and optimization of deep learning (DL) models, even for amateurs.
(This article belongs to the Topic Recent Trends in Image Processing and Pattern Recognition)
18 pages, 498 KiB  
Article
Torus-Connected Toroids: An Efficient Topology for Interconnection Networks
by Antoine Bossard
Computers 2023, 12(9), 173; https://doi.org/10.3390/computers12090173 - 29 Aug 2023
Viewed by 1158
Abstract
Recent supercomputers comprise hundreds of thousands of compute nodes, and sometimes millions; as such, they are massively parallel systems. Node interconnection is thus critical to maximising computing performance, and the torus topology has emerged as a popular solution to this crucial issue. This is the case, for example, for the interconnection network of the Fujitsu Fugaku, which was ranked world no. 1 until May 2022 and is world no. 2 at the time of writing. The number of dimensions used by such torus-based interconnects stays rather low: it is equal to three for the Fujitsu Fugaku’s interconnect. As a result, the arity of the underlying torus topology must be greatly increased to connect the numerous compute nodes involved, ultimately at the cost of a higher network diameter. To avoid such a dramatic rise in diameter, topologies can also combine several layers: such interconnects are called hierarchical interconnection networks (HINs). In this paper, which extends an earlier study, we propose a novel interconnect topology for massively parallel systems, torus-connected toroids (TCT). Its advantage over existing topologies is that, while it retains the torus topology for its desirable properties, it adds a second layer, toroids, to significantly lower the network diameter. We evaluate our proposal both theoretically and empirically, and quantitatively compare it to conventional approaches, which the TCT topology is shown to outperform. Full article
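The arity-versus-diameter trade-off described above can be illustrated with the standard diameter formula for a k-ary n-dimensional torus, n·⌊k/2⌋; a minimal sketch (the TCT construction itself is not reproduced here):

```python
def torus_diameter(arity: int, dims: int) -> int:
    """Diameter of a dims-dimensional torus with the given arity per
    dimension (wraparound links halve the worst-case hop count)."""
    return dims * (arity // 2)

# With a fixed node budget, fewer dimensions force a higher arity and
# thus a larger diameter: 4096 nodes as 16x16x16 vs. 8x8x8x8.
print(torus_diameter(16, 3))  # prints 24
print(torus_diameter(8, 4))   # prints 16
```

This is why a low-dimensional torus scaled to huge node counts pays a diameter penalty, the problem the hierarchical TCT layer is designed to mitigate.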
16 pages, 5133 KiB  
Article
A New Linear Model for the Calculation of Routing Metrics in 802.11s Using ns-3 and RStudio
by Juan Ochoa-Aldeán and Carlos Silva-Cárdenas
Computers 2023, 12(9), 172; https://doi.org/10.3390/computers12090172 - 28 Aug 2023
Viewed by 818
Abstract
Wireless mesh networks (WMNs) offer a pragmatic, cost-effective solution for provisioning ubiquitous broadband internet access and diverse telecommunication systems. The conceptual underpinning of mesh networks finds application not only in IEEE networks, but also in 3GPP networks such as LTE and the low-power wide area network (LPWAN) tailored to the burgeoning Internet of Things (IoT) landscape. IEEE 802.11s is the de facto standard for WMNs; it defines the hybrid wireless mesh protocol (HWMP) as a layer-2 routing protocol and the airtime link metric (ALM). In this intricate landscape, artificial intelligence (AI) plays a prominent role in industry, particularly within the technology and telecommunication realms. This study presents a novel methodology for the computation of routing metrics, specifically the ALM. The methodology uses the network simulator ns-3 and RStudio as a statistical computing environment for data analysis. The former enabled the creation of scripts that elicit a variety of WMN scenarios, whose information is gathered and stored in databases. The latter (RStudio) takes this information and supports two linear predictions: the first uses linear models (lm) and the second general linear models (glm). To conclude this process, statistical tests are applied to the original model as well as to the newly suggested ones. This work contributes substantially in two ways: first, through a methodological tool for the metric calculation of the HWMP protocol of the IEEE 802.11s standard, using lm and glm for the selection and validation of the model regressors; the ANOVA and stepwise tools of RStudio are used at this stage. The second contribution is a linear predictor that improves the WMN’s performance as an a priori mechanism before using the ns-3 simulator; the ANCOVA tool of RStudio is employed here. Full article
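The lm-style prediction step can be sketched outside R as a closed-form ordinary least-squares fit; the variable names below are hypothetical illustrations (the actual regressors come from the paper's ns-3 scenario databases):

```python
def fit_ols(x, y):
    """Closed-form simple linear regression: y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Hypothetical example: predict an airtime-like metric from link load,
# giving an a priori estimate before running a full ns-3 simulation.
load = [0.1, 0.2, 0.4, 0.6, 0.8]
metric = [1.1, 1.9, 4.2, 6.1, 7.9]
a, b = fit_ols(load, metric)
predicted = a + b * 0.5
```

R's `lm` generalizes this to multiple regressors (and `glm` to non-Gaussian responses), but the fitted-predictor idea is the same.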
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
17 pages, 429 KiB  
Article
Securing Financial Transactions with a Robust Algorithm: Preventing Double-Spending Attacks
by Hasan Hashim, Ahmad Reda Alzighaibi, Amaal Farag Elessawy, Ibrahim Gad, Hatem Abdul-Kader and Asmaa Elsaid
Computers 2023, 12(9), 171; https://doi.org/10.3390/computers12090171 - 28 Aug 2023
Cited by 1 | Viewed by 1146
Abstract
A zero-confirmation transaction is a transaction that has not yet been confirmed on the blockchain and is not yet part of it. The network propagates zero-confirmation transactions quickly, but they are not secured against double-spending attacks. In this study, the proposed method secures zero-confirmation transactions by using the Secure Hash Algorithm 512 (SHA-512) within the Elliptic Curve Digital Signature Algorithm (ECDSA) instead of SHA-256 to generate a cryptographic identity for the transactions. The results show that SHA-512 achieves higher throughput than SHA-256 while also producing a larger hash, and that it is more secure than SHA-256. Full article
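The hash-size difference noted above is easy to verify with Python's standard hashlib (throughput comparisons, by contrast, depend on the platform; SHA-512 is often faster per byte on 64-bit CPUs):

```python
import hashlib

# A stand-in payload; a real transaction identity would hash the
# serialized transaction fields.
payload = b"zero-confirmation transaction payload"
d256 = hashlib.sha256(payload).digest()
d512 = hashlib.sha512(payload).digest()
print(len(d256), len(d512))  # prints "32 64" (256 vs. 512 bits)
```

The doubled digest length is what the abstract refers to as the larger hash size of SHA-512.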
24 pages, 5346 KiB  
Article
Evaluating User Satisfaction Using Deep-Learning-Based Sentiment Analysis for Social Media Data in Saudi Arabia’s Telecommunication Sector
by Majed A. Alshamari
Computers 2023, 12(9), 170; https://doi.org/10.3390/computers12090170 - 26 Aug 2023
Cited by 1 | Viewed by 2076
Abstract
Social media has become a common means to convey opinions and express the extent of satisfaction or dissatisfaction with a service or product. In the Kingdom of Saudi Arabia specifically, most social media users share positive and negative opinions about services and products, especially communication services, one of the most important services for citizens, who use them to communicate with the world. This research aimed to analyse and measure user satisfaction with the services provided by the Saudi Telecom Company (STC), Mobily, and Zain. This type of sentiment analysis is an important measure used to make business decisions that increase customer loyalty and satisfaction. In this study, the authors developed advanced deep learning (DL) methods to analyse and reveal the percentage of customer satisfaction using the publicly available AraCust dataset. Several DL models were utilised in this study, including long short-term memory (LSTM), gated recurrent unit (GRU), and BiLSTM, on the AraCust dataset. The LSTM model achieved the highest performance in text classification, demonstrating a 98.04% training accuracy and a 97.03% test score. The study addressed the biggest challenge that telecommunications companies face: that customers’ decisions are influenced by their dissatisfaction with the provided services. Full article
20 pages, 3604 KiB  
Article
Digital Competence of Higher Education Teachers at a Distance Learning University in Portugal
by José António Moreira, Catarina S. Nunes and Diogo Casanova
Computers 2023, 12(9), 169; https://doi.org/10.3390/computers12090169 - 24 Aug 2023
Cited by 2 | Viewed by 1097
Abstract
The Digital Education Action Plan (2021–2027) launched by the European Commission aims to revolutionize education systems, prioritizing the development of a robust digital education ecosystem and the enhancement of teachers’ digital transformation skills. This study focuses on Universidade Aberta, Portugal, to identify the strengths and weaknesses of teachers’ digital skills within the Digital Competence Framework for Educators (DigCompEdu). Using a quantitative approach, the research utilized the DigCompEdu CheckIn self-assessment questionnaire, validated for the Portuguese population, to evaluate teachers’ perceptions of their digital competences. A total of 118 teachers participated in the assessment. Findings revealed that the teachers exhibited a notably high overall level of digital competence, positioned at the intersection of B2 (Expert) and C1 (Leader) on the DigCompEdu scale. However, specific areas for improvement were identified, particularly in Digital Technologies Resources and Assessment, the core pedagogical components of DigCompEdu, which displayed comparatively lower proficiency levels. To ensure continuous progress and alignment with the Digital Education Action Plan’s strategic priorities, targeted teacher training initiatives should focus on enhancing competences related to Digital Technologies Resources and Assessment. Full article
13 pages, 312 KiB  
Article
Kids Surfing the Web: A Comparative Study in Portugal
by Angélica Monteiro, Cláudia Sousa and Rita Barros
Computers 2023, 12(9), 168; https://doi.org/10.3390/computers12090168 - 23 Aug 2023
Cited by 1 | Viewed by 1286
Abstract
The conditions for safe Internet access and the development of skills enabling full participation in online environments are recognized in the Council of Europe’s strategy for child rights, from 2022. The guarantee of this right has implications for experiences inside and outside the school context. Therefore, this study aims to compare the perceptions of students from different educational levels, who participated in a digital storytelling workshop, regarding online safety, searching habits, and digital competences. Data were collected through a questionnaire survey completed by 84 Portuguese students from elementary and secondary schools. A non-parametric multivariate analysis of variance was used to identify differences as children advanced across educational stages. The results revealed that secondary students tended to spend more time online and demonstrated more advanced search skills. Interestingly, the youngest children exhibited higher competences in creating games and practicing safety measures regarding online postings. These findings emphasize the importance of schools, in a joint action with the educational community, including parents, teachers and students, in developing a coordinated and vertically integrated approach to digital education that considers the children’s current knowledge, attitudes, and skills as a starting point for pedagogical intervention. Full article