Computers, Volume 10, Issue 11 (November 2021) – 21 articles

Cover Story: Master lectures in history are usually quite boring for students. Virtual and augmented reality and serious games can help solve this problem. This article presents a playful virtual reality experience set in Ancient Rome that reproduces the different buildings and civil constructions of the time as accurately as possible, making it possible for the player to create Roman cities in a simple way. Once they are built, the user can visit them, accessing the buildings and interacting with the objects and characters that appear. Moreover, to learn more about each building, users can visualize it in augmented reality using marker-based techniques.
13 pages, 1048 KiB  
Article
Teaching an Algorithm How to Catalog a Book
by Ernesto William De Luca, Francesca Fallucchi and Roberto Morelato
Computers 2021, 10(11), 155; https://doi.org/10.3390/computers10110155 - 18 Nov 2021
Cited by 1 | Viewed by 3204
Abstract
This paper presents a study of a strategy for automated cataloging within an OPAC or for online bibliographic catalogs in general. The aim of the analysis is to offer a set of results, when searching in library catalogs, that goes beyond the expected one-to-one term correspondence. The goal is to understand how ontological structures can affect query search results. This analysis can also be applied to search functions outside the library context, but in that case cataloging relies on predefined rules and uncontrolled dictionary terms, which means the results are meaningful in terms of knowledge organization. The approach was tested on an Edisco database, and we measured the system's ability to detect whether a new incoming record belonged to a specific set of textbooks. Full article
(This article belongs to the Special Issue Artificial Intelligence for Digital Humanities (AI4DH))
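As a loose illustration of how ontological structures can widen catalog search beyond one-to-one term matching, the sketch below expands a query term with its narrower terms before matching records. The ontology, records, and field names are invented for illustration and are not the authors' system.

```python
# Minimal sketch of ontology-based query expansion for a catalog search.
# A toy ontology: each subject term maps to its narrower terms.
ONTOLOGY = {
    "mathematics": ["algebra", "geometry"],
    "algebra": ["linear algebra"],
}

CATALOG = [
    {"title": "Introduction to Linear Algebra", "subjects": {"linear algebra"}},
    {"title": "Euclidean Geometry", "subjects": {"geometry"}},
    {"title": "Roman History", "subjects": {"history"}},
]

def expand(term: str) -> set[str]:
    """Return the term plus all narrower terms, transitively."""
    terms, stack = set(), [term]
    while stack:
        t = stack.pop()
        if t not in terms:
            terms.add(t)
            stack.extend(ONTOLOGY.get(t, []))
    return terms

def search(term: str) -> list[str]:
    """Match records whose subjects intersect the expanded term set."""
    query = expand(term)
    return [r["title"] for r in CATALOG if r["subjects"] & query]

# A plain one-to-one search for "mathematics" would return nothing;
# the expanded search also surfaces algebra and geometry textbooks.
print(search("mathematics"))
```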

26 pages, 872 KiB  
Article
Symbolic AI for XAI: Evaluating LFIT Inductive Programming for Explaining Biases in Machine Learning
by Alfonso Ortega, Julian Fierrez, Aythami Morales, Zilong Wang, Marina de la Cruz, César Luis Alonso and Tony Ribeiro
Computers 2021, 10(11), 154; https://doi.org/10.3390/computers10110154 - 17 Nov 2021
Cited by 6 | Viewed by 4263
Abstract
Machine learning methods are growing in relevance for biometrics and personal information processing in domains such as forensics, e-health, recruitment, and e-learning. In these domains, white-box (human-readable) explanations of systems built on machine learning methods become crucial. Inductive logic programming (ILP) is a subfield of symbolic AI aimed at automatically learning declarative theories about the processing of data. Learning from interpretation transition (LFIT) is an ILP technique that can learn a propositional logic theory equivalent to a given black-box system (under certain conditions). The present work takes a first step toward a general methodology for incorporating accurate declarative explanations into classic machine learning by checking the viability of LFIT in a specific AI application scenario: fair recruitment based on an automatic tool, generated with machine learning methods, for ranking curricula vitae that incorporates soft biometric information (gender and ethnicity). We show the expressiveness of LFIT for this specific problem and propose a scheme that can be applied to other domains. To check LFIT's ability to cope with other domains regardless of the machine learning paradigm used, we also ran a preliminary test of its expressiveness on a real dataset of adult incomes taken from the US census, in which we consider income level as a function of the remaining attributes, to verify whether LFIT can provide a logical theory to support and explain to what extent higher incomes are biased by gender and ethnicity. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence for Biometrics 2021)
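LFIT itself is a dedicated ILP system; purely to illustrate the underlying idea of recovering a human-readable propositional theory from a black box's observed input-output behavior, here is a toy sketch with an invented, deliberately biased hiring rule:

```python
from itertools import product

# Toy stand-in for a black-box classifier over three boolean attributes.
# (LFIT proper is a dedicated ILP system; this only illustrates the idea
# of recovering an equivalent propositional theory from observed behavior.)
def black_box(experience: bool, degree: bool, male: bool) -> bool:
    # A hypothetical, deliberately biased hiring rule.
    return (experience and degree) or male

ATTRS = ["experience", "degree", "male"]

def extract_dnf(f) -> list[str]:
    """Enumerate all inputs and emit one conjunctive clause per positive case."""
    clauses = []
    for values in product([False, True], repeat=len(ATTRS)):
        if f(*values):
            lits = [a if v else f"not {a}" for a, v in zip(ATTRS, values)]
            clauses.append(" and ".join(lits))
    return clauses

# Each printed clause is a human-readable condition under which the
# black box answers "hire" -- the bias on `male` becomes visible.
for clause in extract_dnf(black_box):
    print("hire if", clause)
```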

24 pages, 1234 KiB  
Article
Enhancing Robots Navigation in Internet of Things Indoor Systems
by Yahya Tashtoush, Israa Haj-Mahmoud, Omar Darwish, Majdi Maabreh, Belal Alsinglawi, Mahmoud Elkhodr and Nasser Alsaedi
Computers 2021, 10(11), 153; https://doi.org/10.3390/computers10110153 - 15 Nov 2021
Cited by 2 | Viewed by 2612
Abstract
In this study, an effective local minima detection and definition algorithm is introduced for a mobile robot navigating through unknown static environments. Furthermore, five approaches are presented and compared with the popular wall-following approach for pulling the robot out of the local minima enclosure, namely: Random Virtual Target, Reflected Virtual Target, Global Path Backtracking, Half Path Backtracking, and Local Path Backtracking. The proposed approaches mainly depend on temporarily changing the target location to avoid the attraction force that the original target exerts on the robot. Moreover, to avoid getting trapped in the same location, a virtual obstacle is placed to cover the local minima enclosure. To cover the most common shapes of deadlock situations, the proposed approaches were evaluated in four different environments: V-shaped, double U-shaped, C-shaped, and cluttered environments. The results reveal that the robot, using any of the proposed approaches, requires a shorter path to reach the destination, ranging from 59 to 73 m on average, as opposed to the wall-following strategy, which requires an average of 732 m. On average, a robot with a constant speed and the reflected virtual target approach takes 103 s, whereas the identical robot with a wall-following approach takes 907 s to complete the tasks. Using a fuzzy-speed robot, the duration for the wall-following approach is greatly reduced to 507 s, while the reflected virtual target may need only up to 20% of that time. More results and detailed comparisons are provided in the subsequent sections. Full article
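The virtual-target idea can be sketched in a few lines; the geometry, gains, and the stand-in stuck flag below are illustrative assumptions, not the paper's algorithm:

```python
import math
import random

# Sketch of the "virtual target" idea: when the robot detects a local
# minimum, it temporarily replaces the real goal with a virtual one so the
# goal's attraction no longer pins it inside the deadlock region.

def attractive_step(pos, goal, gain=0.5):
    """One gradient step toward the (possibly virtual) goal."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    d = math.hypot(dx, dy) or 1e-9
    return (pos[0] + gain * dx / d, pos[1] + gain * dy / d)

def random_virtual_target(pos, radius=5.0):
    """Place a temporary goal at a random bearing away from the trap."""
    theta = random.uniform(0.0, 2.0 * math.pi)
    return (pos[0] + radius * math.cos(theta),
            pos[1] + radius * math.sin(theta))

pos, goal = (0.0, 0.0), (10.0, 0.0)
stuck = True  # in the real system this comes from the detection algorithm
if stuck:
    virtual = random_virtual_target(pos)
    for _ in range(10):             # follow the virtual goal briefly...
        pos = attractive_step(pos, virtual)
# ...then restore the real goal and continue navigation.
pos = attractive_step(pos, goal)
print(pos)
```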

26 pages, 105467 KiB  
Article
Two-Bit Embedding Histogram-Prediction-Error Based Reversible Data Hiding for Medical Images with Smooth Area
by Ching-Yu Yang and Ja-Ling Wu
Computers 2021, 10(11), 152; https://doi.org/10.3390/computers10110152 - 12 Nov 2021
Cited by 4 | Viewed by 2397
Abstract
During medical treatment, personal privacy is involved and must be protected. Healthcare institutions have to keep medical images or health information secret unless they have permission from the data owner to disclose them. Reversible data hiding (RDH) is a technique that embeds metadata into an image in such a way that the image can be recovered without any distortion after the hidden data have been extracted. This work aims to develop a fully reversible two-bit-embedding RDH algorithm with a large hiding capacity for medical images. Medical images can be partitioned into regions of interest (ROI) and regions of noninterest (RONI). The ROI is informative, carries semantic meanings essential for clinical applications and diagnosis, and cannot tolerate subtle changes. Therefore, we utilize histogram shifting and prediction error to embed metadata into the RONI. In addition, our embedding algorithm minimizes the side effects on the ROI as much as possible. To verify the effectiveness of the proposed approach, we benchmarked three types of medical images in DICOM format, namely X-ray photography (X-ray), computed tomography (CT), and magnetic resonance imaging (MRI). Experimental results show that most of the hidden data are embedded in the RONI, and the approach achieves high capacity while leaving little visible distortion in the ROI. Full article
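As a minimal sketch of histogram-shifting RDH on prediction errors (here one bit per peak position on a 1-D signal, with a trivial left-neighbor predictor, rather than the paper's two-bit scheme on DICOM RONIs):

```python
import numpy as np

def embed(pixels: np.ndarray, bits):
    """Embed bits at the peak of the prediction-error histogram."""
    err = np.diff(pixels.astype(np.int64))          # left-neighbor predictor
    peak = int(np.bincount(err - err.min()).argmax() + err.min())
    out, it = err.copy(), iter(bits)
    out[err > peak] += 1                            # shift to open a gap at peak+1
    for i in np.flatnonzero(err == peak):           # one bit per peak-valued error
        b = next(it, None)
        if b is None:
            break
        out[i] += b                                 # bit 1 -> peak+1, bit 0 -> peak
    marked = np.concatenate(([pixels[0]], pixels[0] + np.cumsum(out)))
    return marked, peak

def extract(marked: np.ndarray, peak: int):
    """Recover the bits and restore the original pixels exactly."""
    err = np.diff(marked.astype(np.int64))
    bits = [int(e == peak + 1) for e in err if e in (peak, peak + 1)]
    err[err == peak + 1] = peak                     # undo embedded 1-bits
    err[err > peak + 1] -= 1                        # undo the shift
    original = np.concatenate(([marked[0]], marked[0] + np.cumsum(err)))
    return bits, original

px = np.array([100, 101, 103, 104, 104, 106, 107], dtype=np.int64)
marked, peak = embed(px, [1, 0, 1])    # in practice, the payload length is
bits, restored = extract(marked, peak) # also stored as side information
assert bits == [1, 0, 1] and (restored == px).all()
```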

24 pages, 1311 KiB  
Article
Solution of the Optimal Reactive Power Flow Problem Using a Discrete-Continuous CBGA Implemented in the DigSILENT Programming Language
by David Lionel Bernal-Romero, Oscar Danilo Montoya and Andres Arias-Londoño
Computers 2021, 10(11), 151; https://doi.org/10.3390/computers10110151 - 12 Nov 2021
Cited by 8 | Viewed by 2521
Abstract
The problem of optimal reactive power flow in transmission systems is addressed in this research from the point of view of combinatorial optimization. A discrete-continuous version of the Chu & Beasley genetic algorithm (CBGA) is proposed to model continuous variables, such as voltage outputs in generators and reactive power injection in capacitor banks, as well as binary variables, such as tap positions in transformers. The minimization of total power losses is the objective performance indicator. The main contribution of this research is the implementation of the CBGA in the DigSILENT Programming Language (DPL), which exploits the advantages of the power flow tool with low computational effort. Solving the optimal reactive power flow problem is a key task, since the efficiency and secure operation of the whole electrical system depend on an adequate distribution of reactive power among generators, transformers, shunt compensators, and transmission lines. To provide an efficient optimization tool for academics and power system operators, this paper selects the DigSILENT software, since it is widely used for power systems in industry and research. Numerical results on three IEEE test systems composed of 6, 14, and 39 buses demonstrate the efficiency of the proposed CBGA in the DPL environment of DigSILENT in reducing total grid power losses (between 21.17% and 37.62% of the benchmark case) across four simulation scenarios regarding voltage regulation bounds and slack voltage outputs. In addition, the total processing times for the IEEE 6-, 14-, and 39-bus systems were 32.33 s, 49.45 s, and 138.88 s, which confirms the low computational effort of optimization methods implemented directly in the DPL environment. Full article
(This article belongs to the Special Issue Computing, Electrical and Industrial Systems 2021)
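A discrete-continuous chromosome of the kind such a CBGA evolves can be sketched as follows; bounds, gene counts, and operators are illustrative assumptions, and fitness would come from a power flow run (in the paper, DigSILENT's solver invoked from DPL):

```python
import random

# Sketch of a discrete-continuous chromosome for reactive power flow:
# continuous genes for generator voltage set-points and capacitor
# injections, integer genes for transformer taps.

V_MIN, V_MAX = 0.95, 1.05      # p.u. voltage limits (assumed)
Q_MAX = 30.0                   # MVAr per capacitor bank (assumed)
TAP_POSITIONS = range(-8, 9)   # discrete tap steps (assumed)

def random_individual(n_gen=3, n_cap=2, n_tap=2):
    return {
        "v_set": [random.uniform(V_MIN, V_MAX) for _ in range(n_gen)],
        "q_cap": [random.uniform(0.0, Q_MAX) for _ in range(n_cap)],
        "taps":  [random.choice(TAP_POSITIONS) for _ in range(n_tap)],
    }

def crossover(a, b):
    """Single-point crossover applied per gene group."""
    child = {}
    for key in a:
        cut = random.randrange(len(a[key]) + 1)
        child[key] = a[key][:cut] + b[key][cut:]
    return child

def mutate(ind, rate=0.1):
    for i in range(len(ind["v_set"])):
        if random.random() < rate:
            ind["v_set"][i] = random.uniform(V_MIN, V_MAX)
    for i in range(len(ind["q_cap"])):
        if random.random() < rate:
            ind["q_cap"][i] = random.uniform(0.0, Q_MAX)
    for i in range(len(ind["taps"])):
        if random.random() < rate:
            ind["taps"][i] = random.choice(TAP_POSITIONS)
    return ind

# Fitness evaluation (total grid losses from a power flow run) would rank
# individuals; Chu & Beasley replacement keeps the population diverse.
pop = [random_individual() for _ in range(20)]
print(mutate(crossover(pop[0], pop[1])))
```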

27 pages, 4012 KiB  
Article
Machine Learning Cybersecurity Adoption in Small and Medium Enterprises in Developed Countries
by Nisha Rawindaran, Ambikesh Jayal and Edmond Prakash
Computers 2021, 10(11), 150; https://doi.org/10.3390/computers10110150 - 10 Nov 2021
Cited by 17 | Viewed by 7470
Abstract
In many developed countries, the usage of artificial intelligence (AI) and machine learning (ML) has become important in paving the future path of how data are managed and secured in the small and medium enterprise (SME) sector. SMEs in these developed countries have created their own cyber regimes around AI and ML. This knowledge is tested daily in how these countries' SMEs run their businesses and identify threats and attacks, based on the support structure of the individual country. Following recent changes to the UK General Data Protection Regulation (GDPR), Brexit, and ISO standards requirements, machine learning cybersecurity (MLCS) adoption in the UK SME market has become prevalent and a good example to lean on, amongst other developed nations. Whilst MLCS has been successfully applied in many applications, including network intrusion detection systems (NIDS) worldwide, there is still a gap in the rate of adoption of MLCS techniques among UK SMEs. Other developed countries, such as Spain and Australia, also fall into this category, and their similarities and differences in MLCS adoption are discussed. Applications of MLCS within these SME industries are also explored. Using quantitative and qualitative methods, the paper investigates the challenges of adopting MLCS in the SME ecosystem and how operations are managed to promote business growth. Much like security guards and policing in the real world, the virtual world now calls on MLCS techniques to be embedded, like secret-service covert operations, to protect the data distributed by the millions into cyberspace. This paper uses existing global research from multiple disciplines to identify gaps and opportunities in UK SME small-business cybersecurity. It also highlights barriers and reasons for the low adoption rates of MLCS in SMEs and compares them with success stories of larger companies implementing MLCS. The methodology uses structured quantitative and qualitative survey questionnaires, distributed across an extensive participation pool directed at SMEs' management and technical and non-technical professionals, using stratified sampling methods. Based on the analysis and findings, this study reveals that, from the primary data obtained, SMEs have the appropriate cybersecurity packages in place but are not fully aware of their potential. Secondary data collection was run in parallel to better understand how these barriers and challenges emerged, and why the rate of adoption of MLCS was so low. The paper concludes that help through government policies and processes, coupled with collaboration, could minimize cyber threats by combating hackers and malicious actors and staying ahead of the game. These aspirations can be reached by ensuring that those involved have been well trained and understand the importance of communication when applying appropriate safety processes and procedures. The paper also highlights important funding gaps that could be addressed to raise cybersecurity awareness in the form of grants, subsidies, and financial assistance through various public-sector policies and training. Lastly, SMEs' lack of understanding of the risks and impacts of cybercrime can lead to conflicting messages between cross-company IT and cybersecurity rules. Finding the right balance between this risk and impact, versus productivity impact and costs, could help UK SMEs get over these hurdles in the quest to promote the usage of MLCS.
The UK and Welsh governments can use the research conducted in this paper to inform and adapt their policies to help UK SMEs become more secure from cyber-attacks, and to compare themselves with other developed countries on the same path. Full article
(This article belongs to the Special Issue Sensors and Smart Cities 2023)

17 pages, 3388 KiB  
Article
Requirements Elicitation for an Assistance System for Complexity Management in Product Development of SMEs during COVID-19: A Case Study
by Jan-Phillip Herrmann, Sebastian Imort, Christoph Trojanowski and Andreas Deuter
Computers 2021, 10(11), 149; https://doi.org/10.3390/computers10110149 - 10 Nov 2021
Cited by 4 | Viewed by 2747
Abstract
Technological progress, upcoming cyber-physical systems, and limited resources confront small and medium-sized enterprises (SMEs) with the challenge of complexity management in product development projects spanning the entire product lifecycle. SMEs require a solution for documenting and analyzing the functional relationships between multiple domains, such as products, software, and processes. The German research project FuPEP “Funktionsorientiertes Komplexitätsmanagement in allen Phasen der Produktentstehung” (function-oriented complexity management in all phases of product development) aims to address this issue by developing an assistance system that supports product developers by visualizing functional relationships. This paper presents the methodology and results of the assistance system's requirements elicitation with two SMEs. Because the elicitation was conducted during a global pandemic, we discuss the application of specific techniques in light of COVID-19. We capture the problems and their effects regarding complexity management in product development in a system dynamics model. The most important requirements and use cases elicited are presented, and the requirements elicitation methodology and results are discussed. Additionally, we present a multilayer software architecture design for the assistance system. Our case study suggests a relationship between the fear of a missing project focus among project participants and the restriction of requirements elicitation techniques to those possible via web conferencing tools. Full article
(This article belongs to the Special Issue Feature Paper in Computers)

17 pages, 2832 KiB  
Article
Automatic Detection of Traffic Accidents from Video Using Deep Learning Techniques
by Sergio Robles-Serrano, German Sanchez-Torres and John Branch-Bedoya
Computers 2021, 10(11), 148; https://doi.org/10.3390/computers10110148 - 09 Nov 2021
Cited by 20 | Viewed by 9807
Abstract
According to worldwide statistics, traffic accidents cause a high percentage of violent deaths. The time taken to send a medical response to the accident site is largely affected by the human factor and correlates with survival probability. Due to this, and the wide use of video surveillance and intelligent traffic systems, an automated traffic accident detection approach has become desirable for computer vision researchers. Nowadays, Deep Learning (DL)-based approaches show high performance in computer vision tasks that involve complex feature relationships. Therefore, this work develops an automated DL-based method capable of detecting traffic accidents in video. The proposed method assumes that traffic accident events are described by visual features occurring in a temporal sequence; the model architecture therefore comprises a visual feature extraction phase followed by temporal pattern identification. The visual and temporal features are learned in the training phase through convolutional and recurrent layers, using both built-from-scratch and public datasets. An accuracy of 98% is achieved in detecting accidents in public traffic accident datasets, showing a high detection capacity independent of road structure. Full article
(This article belongs to the Special Issue Machine Learning for Traffic Modeling and Prediction)
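A minimal sketch of the visual-then-temporal architecture the abstract describes (per-frame convolutional features fed to a recurrent layer); layer sizes, input shape, and the classification head are assumptions, not the authors' exact model:

```python
from tensorflow.keras import layers, models

FRAMES, H, W, C = 16, 112, 112, 3        # a clip of 16 RGB frames (assumed)

# Per-frame visual feature extractor (convolutional layers).
cnn = models.Sequential([
    layers.Input(shape=(H, W, C)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),     # one feature vector per frame
])

# Temporal pattern identification over the frame sequence.
model = models.Sequential([
    layers.Input(shape=(FRAMES, H, W, C)),
    layers.TimeDistributed(cnn),         # apply the CNN to each frame
    layers.LSTM(64),                     # learn the temporal pattern
    layers.Dense(1, activation="sigmoid"),   # accident / no accident
])

model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
# Training would use labeled clips, e.g. model.fit(clips, labels, ...)
```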

21 pages, 1417 KiB  
Article
On the Optimization of Self-Organization and Self-Management Hardware Resource Allocation for Heterogeneous Clouds
by Konstantinos M. Giannoutakis, Christos K. Filelis-Papadopoulos, George A. Gravvanis and Dimitrios Tzovaras
Computers 2021, 10(11), 147; https://doi.org/10.3390/computers10110147 - 09 Nov 2021
Viewed by 1777
Abstract
In recent years, there has been a tendency to migrate from traditional homogeneous clouds and centralized provisioning of resources to heterogeneous clouds with specialized hardware, governed in a distributed and autonomous manner. The recently proposed CloudLightning architecture introduced a dynamic way to provision heterogeneous cloud resources by shifting the selection of underlying resources from the end-user to the system in an efficient way. In this work, an optimized Suitability Index and assessment function are proposed, along with their theoretical analysis, to improve the computational efficiency, energy consumption, service delivery, and scalability of distributed orchestration. The effectiveness of the proposed scheme is evaluated through simulation, comparing the optimized methods with the original approach and traditional centralized resource management on real and synthetic High Performance Computing applications. Finally, numerical results are presented and discussed regarding the improvements over the defined evaluation criteria. Full article
(This article belongs to the Special Issue Real-Time Systems in Emerging IoT-Embedded Applications)

19 pages, 26345 KiB  
Article
Learning History Using Virtual and Augmented Reality
by Inmaculada Remolar, Cristina Rebollo and Jon A. Fernández-Moyano
Computers 2021, 10(11), 146; https://doi.org/10.3390/computers10110146 - 08 Nov 2021
Cited by 12 | Viewed by 6182
Abstract
Master lectures in history are usually quite boring for students, and keeping their attention requires great effort from teachers. Virtual and augmented reality have clear potential in education and can help solve this problem. Serious games that use immersive technologies allow students to visit and interact with environments set in different historical periods. With this in mind, this article presents a playful virtual reality experience set in Ancient Rome that allows the user to learn concepts from that age. The virtual experience reproduces the different buildings and civil constructions of the time as accurately as possible, making it possible for the player to create Roman cities in a simple way. Once built, the user can visit them, accessing the buildings and interacting with the objects and characters that appear. Moreover, to learn more about each building, users can visualize it in augmented reality using marker-based techniques. Information related to each building has been included, such as its main uses, characteristics, and even some representative images. To evaluate the effectiveness of the developed experience, several experiments were carried out using secondary school students as the sample. Initially, the game's quality and playability were evaluated and, subsequently, the motivational value of the virtual learning experience for history. The results support, on the one hand, the game's playability and attractiveness and, on the other, the students' increased interest in studying history, as well as better retention of the different concepts treated in a playful experience. Full article
(This article belongs to the Special Issue Xtended or Mixed Reality (AR+VR) for Education)

25 pages, 512 KiB  
Article
In-Depth Analysis of Ransom Note Files
by Yassine Lemmou, Jean-Louis Lanet and El Mamoun Souidi
Computers 2021, 10(11), 145; https://doi.org/10.3390/computers10110145 - 08 Nov 2021
Cited by 3 | Viewed by 3469
Abstract
During recent years, many papers have been published on ransomware, but to the best of our knowledge, no previous academic studies have been conducted on ransom note files. In this paper, we present the results of an in-depth study of the filenames and content of ransom files. We propose a prototype to identify ransom files. We then explore how the filenames and content of these files can minimize the risk of encryption by certain ransomware families or increase the effectiveness of some ransomware detection tools. To achieve these objectives, two approaches are discussed in this paper. The first uses Latent Semantic Analysis (LSA) to check similarities between the contents of files. The second uses Machine Learning models to classify filenames into two classes: ransom filenames and benign filenames. Full article
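The LSA step can be sketched with scikit-learn: project document contents into a low-rank latent space and compare them by cosine similarity. The sample texts below are invented; the paper works on real ransom note corpora.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Sketch of LSA-based content similarity: TF-IDF vectors reduced by
# truncated SVD, then compared pairwise with cosine similarity.
docs = [
    "all your files have been encrypted pay bitcoin to recover them",
    "your documents are encrypted send payment to this wallet",
    "meeting agenda for the quarterly budget review",
]

tfidf = TfidfVectorizer().fit_transform(docs)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)
sims = cosine_similarity(lsa)

# The two ransom-note-like texts end up far more similar to each other
# than either is to the benign document.
print(sims.round(2))
```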

15 pages, 4946 KiB  
Article
Design of CAN Bus Communication Interfaces for Forestry Machines
by Geoffrey Spencer, Frutuoso Mateus, Pedro Torres, Rogério Dionísio and Ricardo Martins
Computers 2021, 10(11), 144; https://doi.org/10.3390/computers10110144 - 08 Nov 2021
Cited by 8 | Viewed by 3592
Abstract
This paper presents the initial development of new hardware devices targeted at CAN (Controller Area Network) bus communications in forest machines. CAN bus is a widely used protocol for communications in the automotive area. It is also applied in industrial vehicles and machines due to its robustness, simplicity, and operating flexibility. It is attractive for forestry machinery producers who need to couple their equipment to machines, in a sector that, like the transportation industry, recognizes the importance of standardizing communications between tools and machines. One of the problems producers sometimes face is a lack of flexibility in commercial hardware modules, for example, in interfaces for sensors and actuators that must guarantee scalability as new functionalities are required. The hardware device presented in this work is designed to overcome these limitations, providing the flexibility to standardize communications while allowing scalability in the development of new products and features. The work is being developed within the scope of the research project “SMARTCUT—Remote Diagnosis, Maintenance and Simulators for Operation Training and Maintenance of Forest Machines”, to incorporate innovative technologies in forest machines produced by CUTPLANT S.A. It consists of an experimental system based on the PIC18F26K83 microcontroller forming a CAN node that transmits and receives digital and analog messages via the CAN bus, tested and validated through communication between different nodes. The main contribution of the paper is the development of new CAN bus electronic control units designed to enable remote communication between sensors, actuators, and the main controller of forest machines. Full article
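On the host side, exercising such a CAN node might look like the following python-can sketch; the channel name, arbitration IDs, and payload layout are assumptions for illustration (the node firmware itself runs on the PIC18F26K83):

```python
import can

# Host-side sketch for exercising a CAN node, e.g., from a PC with a
# SocketCAN interface. IDs and payload layout are hypothetical.
bus = can.interface.Bus(channel="can0", bustype="socketcan")

# Request a (hypothetical) analog reading from node 0x21.
request = can.Message(arbitration_id=0x121, data=[0x01], is_extended_id=False)
bus.send(request)

# Wait for the node's reply and decode a 16-bit sensor value.
reply = bus.recv(timeout=1.0)
if reply is not None and reply.arbitration_id == 0x221:
    value = int.from_bytes(reply.data[0:2], "big")
    print(f"sensor reading: {value}")
else:
    print("no reply from node")

bus.shutdown()
```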

23 pages, 4076 KiB  
Article
Estimating Interpersonal Distance and Crowd Density with a Single-Edge Camera
by Alem Fitwi, Yu Chen, Han Sun and Robert Harrod
Computers 2021, 10(11), 143; https://doi.org/10.3390/computers10110143 - 05 Nov 2021
Cited by 7 | Viewed by 2627
Abstract
For public safety and physical security, more than a billion closed-circuit television (CCTV) cameras are currently in use around the world. The proliferation of artificial intelligence (AI) and machine/deep learning (M/DL) technologies has enabled significant applications, including crowd surveillance. State-of-the-art distance and area estimation algorithms either need multiple cameras or a reference object as ground truth; obtaining an estimate using a single camera without a scale reference remains an open question. In this paper, we propose a novel solution called E-SEC, which estimates the interpersonal distance between a pair of dynamic human objects, the area occupied by a dynamic crowd, and crowd density using a single edge camera. The E-SEC framework comprises edge CCTV cameras responsible for capturing a crowd on video frames, leveraging a customized YOLOv3 model for human detection. E-SEC contributes an interpersonal distance estimation algorithm vital for monitoring the social distancing of a crowd, and an area estimation algorithm for dynamically determining the area occupied by a crowd of changing size and position. A unified output module generates the crowd size, interpersonal distances, social distancing violations, area, and density for every frame. Experimental results validate the accuracy and efficiency of E-SEC on a range of different video datasets. Full article
(This article belongs to the Special Issue Feature Paper in Computers)
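One common single-camera heuristic, sketched below, recovers a per-person scale from the pixel height of each detected bounding box against an assumed average body height; this is our illustration of the problem, not necessarily E-SEC's exact algorithm:

```python
import math

# Sketch: convert the pixel gap between two detected people into meters
# by estimating meters-per-pixel from bounding-box heights.
AVG_HEIGHT_M = 1.7  # assumed average human height

def box_center(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def interpersonal_distance(box_a, box_b):
    """Estimate the distance in meters between two detected people."""
    # Meters-per-pixel estimated from each box height, then averaged.
    mpp_a = AVG_HEIGHT_M / (box_a[3] - box_a[1])
    mpp_b = AVG_HEIGHT_M / (box_b[3] - box_b[1])
    mpp = (mpp_a + mpp_b) / 2
    (ax, ay), (bx, by) = box_center(box_a), box_center(box_b)
    return math.hypot(ax - bx, ay - by) * mpp

# Two hypothetical YOLO detections as (x1, y1, x2, y2) pixel boxes:
p1, p2 = (100, 200, 160, 540), (400, 210, 455, 530)
d = interpersonal_distance(p1, p2)
print(f"estimated distance: {d:.2f} m", "VIOLATION" if d < 2.0 else "ok")
```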

14 pages, 2664 KiB  
Article
An Architecture for Distributed Electronic Documents Storage in Decentralized Blockchain B2B Applications
by Obadah Hammoud, Ivan Tarkhanov and Artyom Kosmarski
Computers 2021, 10(11), 142; https://doi.org/10.3390/computers10110142 - 04 Nov 2021
Cited by 5 | Viewed by 1949
Abstract
This paper investigates the problem of distributed storage of electronic documents (both metadata and files) in decentralized blockchain-based B2B systems (DApps). The need to reduce the cost of implementing such systems and the insufficiently elaborated issue of storing big data in DLT are considered. An approach for building such systems is proposed that optimizes the size of the required storage (by using erasure coding) while providing secure data storage in the geographically distributed systems of a company or a consortium of companies. The novelty of this solution is that we are the first to combine enterprise DLT with distributed file storage in which the availability of files is controlled. The results of our experiment demonstrate that the speed of the described DApp is comparable to known B2C torrent projects, justifying the choice of Hyperledger Fabric and Ethereum Enterprise. The test results also show that public blockchain networks are not suitable for creating such a B2B system. The proposed system addresses the main challenges of distributed data storage by grouping data into clusters and managing them with a load balancer, while preventing data tampering using a blockchain network. The proposed DApp storage methodology scales easily horizontally in terms of distributed file storage and can be deployed on cloud computing technologies, while minimizing the required storage space. We compare this approach with known methods of file storage in distributed systems, including central storage, torrents, IPFS, and Storj. The reliability of the approach is calculated, and the result is compared to traditional solutions based on full backup. Full article
(This article belongs to the Special Issue Integration of Cloud Computing and IoT)
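The storage saving behind erasure coding can be illustrated with the simplest possible code, a single XOR parity block (overhead 1/k, tolerating one lost shard); the paper's system uses stronger Reed-Solomon-style codes across distributed nodes:

```python
# Minimal sketch of erasure coding: instead of keeping a full replica
# (100% overhead), split a file into k data blocks plus parity so a lost
# block can be rebuilt. One XOR parity block is the simplest instance.

def encode(data: bytes, k: int = 4):
    """Split data into k padded blocks and append one XOR parity block."""
    size = -(-len(data) // k)  # ceiling division
    blocks = [data[i * size:(i + 1) * size].ljust(size, b"\0")
              for i in range(k)]
    parity = bytearray(size)
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return blocks + [bytes(parity)]

def rebuild(shards):
    """Recover the single missing shard as the XOR of all the others."""
    size = len(next(s for s in shards if s is not None))
    out = bytearray(size)
    for shard in shards:
        if shard is not None:
            for i, byte in enumerate(shard):
                out[i] ^= byte
    return bytes(out)

shards = encode(b"electronic document contents go here", k=4)
lost = 2                                  # pretend one storage node fails
damaged = [s if i != lost else None for i, s in enumerate(shards)]
assert rebuild(damaged) == shards[lost]   # the lost block is restored
```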

11 pages, 1989 KiB  
Article
Employee Attrition Prediction Using Deep Neural Networks
by Salah Al-Darraji, Dhafer G. Honi, Francesca Fallucchi, Ayad I. Abdulsada, Romeo Giuliano and Husam A. Abdulmalik
Computers 2021, 10(11), 141; https://doi.org/10.3390/computers10110141 - 03 Nov 2021
Cited by 12 | Viewed by 6098
Abstract
Decision-making plays an essential role in management and may be the most important component of the planning process. Employee attrition is a well-known problem that requires the right decisions from the administration to retain highly qualified employees. Interestingly, artificial intelligence is utilized extensively as an efficient tool for predicting such a problem. The proposed work utilizes a deep learning technique, along with several preprocessing steps, to improve the prediction of employee attrition. Several factors lead to employee attrition; these factors are analyzed to reveal their intercorrelations and to identify the dominant ones. Our work was tested using the imbalanced IBM analytics dataset, which contains 35 features for 1470 employees. To obtain realistic results, we derived a balanced version from the original dataset. Finally, cross-validation was implemented to evaluate our work precisely. Extensive experiments were conducted to show the practical value of our work. The prediction accuracy using the original dataset is about 91%, whereas it is about 94% using the synthetic, balanced dataset. Full article
(This article belongs to the Special Issue Feature Paper in Computers)
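A dense network for tabular attrition prediction of the kind the abstract describes can be sketched as follows; the layer sizes and training settings are illustrative assumptions, and random arrays stand in for the IBM dataset's 35 features:

```python
import numpy as np
from tensorflow.keras import layers, models

# Sketch of a dense network for tabular attrition prediction.
N_FEATURES = 35
X = np.random.rand(1470, N_FEATURES).astype("float32")  # stand-in data
y = np.random.randint(0, 2, size=(1470,))               # stand-in labels

model = models.Sequential([
    layers.Input(shape=(N_FEATURES,)),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.3),                    # regularization: small dataset
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # P(employee leaves)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))
```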

12 pages, 2175 KiB  
Article
A Cognitive Diagnostic Module Based on the Repair Theory for a Personalized User Experience in E-Learning Software
by Akrivi Krouska, Christos Troussas and Cleo Sgouropoulou
Computers 2021, 10(11), 140; https://doi.org/10.3390/computers10110140 - 29 Oct 2021
Cited by 11 | Viewed by 2010
Abstract
This paper presents a novel cognitive diagnostic module incorporated into e-learning software for tutoring the markup language HTML. The system is responsible for detecting learners' cognitive bugs and delivering personalized guidance. The novelty of this approach is that it is based on Repair theory and incorporates additional features, such as student negligence and test completion times, into its diagnostic mechanism; it also employs a recommender module that suggests optimal learning paths to students based on their misconceptions, using descriptive test feedback and adaptive learning content. Following Repair theory, the diagnostic mechanism uses a library of error correction rules to explain the cause of the errors the student makes during assessment. This library covers common errors, thereby creating a hypothesis space. The test items are then expanded so that they belong to the hypothesis space. Both the system and the cognitive diagnostic tool were evaluated with promising results, showing that they offer a personalized experience to learners. Full article
(This article belongs to the Special Issue Present and Future of E-Learning Technologies)
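A Repair-theory-style diagnostic can be sketched as a library of buggy rules, each predicting the wrong answer a given misconception would produce; the HTML item and rules below are invented for illustration:

```python
# Toy sketch of a Repair-theory-style diagnostic: match a student's
# answer against the wrong answers that known misconceptions predict.

QUESTION = "Write a hyperlink to https://example.com with the text 'Home'."
CORRECT = '<a href="https://example.com">Home</a>'

# Each rule: (misconception label, answer that misconception produces).
BUGGY_RULES = [
    ("confuses href with src",
     '<a src="https://example.com">Home</a>'),
    ("forgets the closing tag",
     '<a href="https://example.com">Home'),
    ("puts the text in an attribute",
     '<a href="https://example.com" text="Home"></a>'),
]

def diagnose(answer: str) -> str:
    if answer.strip() == CORRECT:
        return "correct"
    for label, predicted in BUGGY_RULES:
        if answer.strip() == predicted:
            return f"misconception detected: {label}"
    return "unrecognized error (outside the hypothesis space)"

print(diagnose('<a src="https://example.com">Home</a>'))
```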

17 pages, 8197 KiB  
Article
Brain Tumor Segmentation of MRI Images Using Processed Image Driven U-Net Architecture
by Anuja Arora, Ambikesh Jayal, Mayank Gupta, Prakhar Mittal and Suresh Chandra Satapathy
Computers 2021, 10(11), 139; https://doi.org/10.3390/computers10110139 - 28 Oct 2021
Cited by 19 | Viewed by 7798
Abstract
Brain tumor segmentation seeks to separate healthy tissue from tumorous regions. This is an essential step in diagnosis and treatment planning to maximize the likelihood of successful treatment. Magnetic resonance imaging (MRI) provides detailed information about brain tumor anatomy, making it an important tool for effective diagnosis and a prerequisite for replacing the existing manual detection process, in which patients rely on the skills and expertise of a human reader. To address this problem, a brain tumor segmentation and detection system is proposed, with experiments run on the collected BraTS 2018 dataset. This dataset contains four MRI modalities for each patient (T1, T2, T1Gd, and FLAIR) and provides, as ground truth, a segmented image with the tumor class label. A fully automatic methodology for segmenting gliomas in pre-operative MRI scans is developed using a U-Net-based deep learning model. The input image data are first transformed and then processed through several techniques: subset division, narrow object region extraction, category brain slicing, the watershed algorithm, and feature scaling. All these steps are applied before the data enter the U-Net deep learning model, which performs pixel-level segmentation of the tumor region. The algorithm reached high accuracy on the BraTS 2018 training, validation, and testing datasets. The proposed model achieved a Dice coefficient of 0.9815, 0.9844, 0.9804, and 0.9954 on the testing dataset for sets HGG-1, HGG-2, HGG-3, and LGG-1, respectively. Full article
(This article belongs to the Special Issue Advances of Machine and Deep Learning in the Health Domain)
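A minimal U-Net sketch (one encoder level, a bottleneck, one decoder level with a skip connection) shows the architecture's shape; the depth, filter counts, and single-channel 128x128 input are assumptions, not the paper's exact model:

```python
from tensorflow.keras import layers, models

# Minimal U-Net: encoder, bottleneck, and decoder with a skip connection.
def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

inputs = layers.Input(shape=(128, 128, 1))
c1 = conv_block(inputs, 32)                   # encoder level
p1 = layers.MaxPooling2D()(c1)
b = conv_block(p1, 64)                        # bottleneck
u1 = layers.UpSampling2D()(b)                 # decoder level
u1 = layers.Concatenate()([u1, c1])           # skip connection
c2 = conv_block(u1, 32)
outputs = layers.Conv2D(1, 1, activation="sigmoid")(c2)  # per-pixel mask

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```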

16 pages, 2938 KiB  
Article
Evaluating GraphQL and REST API Services Performance in a Massive and Intensive Accessible Information System
by Armin Lawi, Benny L. E. Panggabean and Takaichi Yoshida
Computers 2021, 10(11), 138; https://doi.org/10.3390/computers10110138 - 27 Oct 2021
Cited by 11 | Viewed by 5981
Abstract
Currently, most middleware application developers have two choices when designing or implementing Application Programming Interface (API) services: they can either stick with Representational State Transfer (REST) or explore the emerging GraphQL technology. Although REST is widely regarded as the standard method for API development, GraphQL is believed to be revolutionary in overcoming the main drawbacks of REST, especially data-fetching issues. Nevertheless, doubts remain, as there have been no investigations with convincing results evaluating the performance of the two services. This paper proposes a new research methodology to evaluate the performance of REST and GraphQL API services, with two main novelties. The first is that the evaluation of the two services is performed on the real, ongoing operation of a management information system, where massive and intensive query transactions take place on a complex database with many relationships. The second is that fair and independent performance results are obtained by distributing client requests and synchronizing service responses on two virtually separated parallel execution paths, one for each API service. The performance evaluation used basic QoS (Quality of Service) measures: response time, throughput, CPU load, and memory usage. We use the term efficiency when comparing the evaluation results to capture differences in these performance measures. A statistical hypothesis test using the two-tailed paired t-test, together with boxplot visualization, confirms the significance of the comparison results. The results show that REST is still up to 50.50% faster in response time and 37.16% higher in throughput, while GraphQL is much more efficient in resource utilization: 37.26% for CPU load and 39.74% for memory utilization. Therefore, GraphQL is the right choice when data requirements change frequently and resource utilization is the most important consideration, whereas REST is preferable when certain data are frequently accessed and called by multiple requests. Full article
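The data-fetching difference such a comparison exercises can be sketched with equivalent calls: REST returns fixed representations per resource, while GraphQL lets the client name exactly the fields it needs in one query. The endpoint and schema below are hypothetical.

```python
import requests

BASE = "https://api.example.com"   # hypothetical service

# REST: one request per resource, full representations returned.
student = requests.get(f"{BASE}/students/42").json()
courses = requests.get(f"{BASE}/students/42/courses").json()

# GraphQL: a single request asking only for the needed fields.
query = """
{
  student(id: 42) {
    name
    courses { title grade }
  }
}
"""
resp = requests.post(f"{BASE}/graphql", json={"query": query})
print(resp.json())
```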

15 pages, 288 KiB  
Article
Affecting Young Children’s Knowledge, Attitudes, and Behaviors for Ultraviolet Radiation Protection through the Internet of Things: A Quasi-Experimental Study
by Sotiroula Theodosi and Iolie Nicolaidou
Computers 2021, 10(11), 137; https://doi.org/10.3390/computers10110137 - 25 Oct 2021
Cited by 6 | Viewed by 2675
Abstract
Prolonged exposure to ultraviolet (UV) radiation is linked to skin cancer, and children are more vulnerable to UV's harmful effects than adults. Children's active involvement in using Internet of Things (IoT) devices to collect and analyze real-time UV radiation data has been suggested as a way to increase their awareness of UV protection. This quasi-experimental pre-test post-test control-group study implemented light sensors in a STEM inquiry-based learning environment focusing on UV radiation and protection in primary education. This exploratory, small-scale study investigated the effect of a STEM environment using IoT devices on sixth graders' knowledge, attitudes, and behaviors regarding UV radiation and protection. Participants were 31 primary school students. Experimental group participants (n = 15) attended four eighty-minute inquiry-based lessons on UV radiation and protection and used sensors to measure and analyze UV radiation at their school. Data sources included questionnaires on UV knowledge, attitudes, and behaviors administered pre- and post-intervention. Statistically significant learning gains were found only for the experimental group (t(14) = −3.64, p = 0.003). A statistically significant positive behavioral change was reported for experimental group participants six weeks post-intervention. The study adds empirical evidence for the value of real-time, data-driven approaches using IoT devices to positively influence students' knowledge and behaviors related to socio-scientific problems affecting their health. Full article
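The reported learning-gain test can be reproduced procedurally with SciPy; the reported t(14) implies 15 paired scores, but the scores below are invented, so only the procedure matches:

```python
from scipy import stats

# Paired pre/post comparison of the kind reported for the experimental
# group (n = 15). The scores are invented stand-ins.
pre  = [4, 5, 3, 6, 4, 5, 2, 6, 5, 4, 3, 5, 4, 6, 3]
post = [6, 7, 5, 7, 6, 6, 4, 8, 6, 6, 5, 7, 5, 8, 5]

t, p = stats.ttest_rel(pre, post)   # two-tailed paired t-test
print(f"t(14) = {t:.2f}, p = {p:.4f}")
```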
18 pages, 2561 KiB  
Article
B-MFO: A Binary Moth-Flame Optimization for Feature Selection from Medical Datasets
by Mohammad H. Nadimi-Shahraki, Mahdis Banaie-Dezfouli, Hoda Zamani, Shokooh Taghian and Seyedali Mirjalili
Computers 2021, 10(11), 136; https://doi.org/10.3390/computers10110136 - 25 Oct 2021
Cited by 91 | Viewed by 3760
Abstract
Advancements in medical technology have created numerous large datasets with many features. Usually, not all captured features are necessary: redundant and irrelevant features reduce the performance of algorithms. To tackle this challenge, many metaheuristic algorithms have been used to select effective features. However, most are not effective and scalable enough to select features from large medical datasets as well as small ones. Therefore, in this paper, a binary moth-flame optimization (B-MFO) is proposed to select effective features from small and large medical datasets. Three categories of B-MFO were developed using S-shaped, V-shaped, and U-shaped transfer functions to convert the canonical MFO from continuous to binary. These categories were evaluated on seven medical datasets, and the results were compared with four well-known binary metaheuristic optimization algorithms: BPSO, bGWO, BDA, and BSSA. In addition, the convergence behavior of B-MFO and the comparative algorithms was assessed, and the results were statistically analyzed using the Friedman test. The experimental results demonstrate a superior performance of B-MFO in solving the feature selection problem for different medical datasets compared to the other algorithms. Full article
(This article belongs to the Special Issue Advances of Machine and Deep Learning in the Health Domain)
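How S-shaped and V-shaped transfer functions binarize a continuous position can be sketched as follows; the classic S1/V1 forms from the binary-metaheuristic literature are used here, which may differ from the paper's exact variants:

```python
import math
import random

# Sketch of binarizing a continuous metaheuristic position (one MFO
# dimension = one candidate feature) with transfer functions.

def s_shaped(x: float) -> float:
    """Sigmoid: maps a position to the probability of selecting the feature."""
    return 1.0 / (1.0 + math.exp(-x))

def v_shaped(x: float) -> float:
    """|tanh|: maps a position to the probability of flipping the current bit."""
    return abs(math.tanh(x))

def binarize_s(position: list[float]) -> list[int]:
    return [1 if random.random() < s_shaped(x) else 0 for x in position]

def binarize_v(position: list[float], current: list[int]) -> list[int]:
    return [1 - b if random.random() < v_shaped(x) else b
            for x, b in zip(position, current)]

pos = [-2.0, 0.1, 3.5, -0.4]          # continuous MFO position
mask = binarize_s(pos)                 # 1 = feature kept, 0 = dropped
print(mask, binarize_v(pos, mask))
```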

8 pages, 205 KiB  
Editorial
Blockchain and Recordkeeping: Editorial
by Victoria L. Lemieux
Computers 2021, 10(11), 135; https://doi.org/10.3390/computers10110135 - 20 Oct 2021
Cited by 5 | Viewed by 2884
Abstract
Distributed ledger technologies (DLT), including blockchains, combine the use of cryptography and distributed networks to achieve a novel form of records creation and keeping designed for tamper-resistance and immutability. Over the past several years, these capabilities have made DLTs, including blockchains, increasingly popular as a general-purpose technology used for recordkeeping in a variety of sectors and industry domains, yet many open challenges and issues, both theoretical and applied, remain. This editorial introduces the Special Issue of Computers focusing on exploring the frontiers of blockchain/distributed ledger technology and recordkeeping. Full article
(This article belongs to the Special Issue Blockchain Technology and Recordkeeping)