Computers, Volume 11, Issue 11 (November 2022) – 15 articles

Cover Story (view full-size image): An intelligent robotic welding system using computer vision to detect the edges of plates. Here, we focus on a new image-processing approach that detects welding lines by tracking the edges of plates at the speed required by a three-degree-of-freedom robotic arm. The two algorithms combined in the developed approach are edge detection and the top-hat transformation. An adaptive neuro-fuzzy inference system (ANFIS) was used to choose the best forward and inverse kinematics of the robot. A MIG welding tool was mounted at the end-effector, and the weld was completed according to the required working conditions and performance. The parts of the system work together with compatible and consistent performance, with acceptable accuracy in tracking the welding path. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
12 pages, 1445 KiB  
Article
Understanding Bitcoin Price Prediction Trends under Various Hyperparameter Configurations
by Jun-Ho Kim and Hanul Sung
Computers 2022, 11(11), 167; https://doi.org/10.3390/computers11110167 - 21 Nov 2022
Cited by 4 | Viewed by 1754
Abstract
Since bitcoin has gained recognition as a valuable asset, researchers have begun to use machine learning to predict the bitcoin price. However, because of the impractical cost of hyperparameter optimization, making accurate predictions is highly challenging. In this paper, we analyze prediction performance trends under various hyperparameter configurations to help researchers identify the optimal hyperparameter combination with little effort. We employ two datasets covering different time periods of the same bitcoin price series to analyze prediction performance based on the similarity between the data used for learning and future data. With them, we measure the loss rates between predicted values and the real price while adjusting the values of three representative hyperparameters. The analysis shows that distinct hyperparameter configurations are needed for high prediction accuracy, depending on the similarity between the training data and the future data. Based on this result, we propose a direction for the hyperparameter optimization of bitcoin price prediction that achieves high accuracy. Full article
(This article belongs to the Special Issue BLockchain Enabled Sustainable Smart Cities (BLESS 2022))
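The paper's setup can be sketched in miniature: a toy predictor whose single hyperparameter is tuned separately on two series that differ in how much their past resembles their future. The moving-average "model", the synthetic series, and the loss function below are illustrative stand-ins, not the authors' networks or data.

```python
import random

def predict_next(prices, window):
    """Naive one-step predictor: mean of the last `window` prices."""
    return sum(prices[-window:]) / window

def loss_rate(prices, window):
    """Mean absolute percentage error of one-step-ahead predictions."""
    errors = [abs(predict_next(prices[:t], window) - prices[t]) / prices[t]
              for t in range(window, len(prices))]
    return sum(errors) / len(errors)

random.seed(0)
# Two synthetic series standing in for the paper's two datasets: one whose
# past resembles its future, and one with little such similarity.
smooth = [100 + 0.1 * t for t in range(60)]
noisy = [100 + random.uniform(-5, 5) for _ in range(60)]

# Grid search over one representative hyperparameter (the look-back window);
# the winning configuration differs with the character of the data.
best = {name: min(range(2, 11), key=lambda w: loss_rate(series, w))
        for name, series in [("smooth", smooth), ("noisy", noisy)]}
```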
15 pages, 3083 KiB  
Article
Digital Twin in the Provision of Power Wheelchairs Context: Support for Technical Phases and Conceptual Model
by Carolina Lagartinho-Oliveira, Filipe Moutinho and Luís Gomes
Computers 2022, 11(11), 166; https://doi.org/10.3390/computers11110166 - 19 Nov 2022
Cited by 1 | Viewed by 1716
Abstract
Worldwide, many wheelchair users find it difficult to use or acquire a wheelchair that is appropriate for them, either because they do not have the necessary financial support or because they do not have access to trained healthcare professionals (HCPs), who are essential for the correct provision of assistive products and user training. Consequently, although wheelchairs are designed to promote the well-being of their users, in many cases they end up being abandoned or provide no benefit, with the chance of causing harm and potentially putting people in danger. This article proposes the creation and use of a Digital Twin (DT) of a Power Wheelchair (PWC) to promote the health of wheelchair users, by facilitating and improving the delivery of remote services by HCPs, and by including monitoring services to support timely maintenance. A DT is a virtual counterpart that is seamlessly linked to a physical asset, both relying on data and information exchange to mirror each other. DTs are currently emerging in different areas as a promising approach to gathering insightful data, which are shared between the physical and virtual worlds and facilitate the means to design, monitor, analyze, optimize, predict, and control physical entities. This article gives an overview of the Digital Twin concept, namely its definition, types, and properties; synthesizes the technologies and tools frequently used to enable Digital Twins; explains how a DT can be used in the technical phases of the PWC provision process; and proposes a conceptual model, highlighting a model-driven development (MDD) approach benefiting from the Petri net formalism, to systematize the development of a PWC DT. Full article
(This article belongs to the Special Issue Computing, Electrical and Industrial Systems 2022)
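As a rough illustration of the Petri net formalism the conceptual model builds on, here is a minimal place/transition net in Python; the provision-workflow places and transitions are hypothetical, not taken from the paper.

```python
class PetriNet:
    """Minimal place/transition net: a transition consumes one token from
    each input place and produces one token in each output place."""

    def __init__(self, marking, transitions):
        self.marking = dict(marking)        # place -> token count
        self.transitions = transitions      # name -> (input places, output places)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Hypothetical fragment of a PWC provision workflow: assessment -> fitting -> delivery.
net = PetriNet(
    marking={"assessed": 1},
    transitions={
        "fit":     (["assessed"], ["fitted"]),
        "deliver": (["fitted"],   ["delivered"]),
    },
)
net.fire("fit")
net.fire("deliver")
```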
23 pages, 2346 KiB  
Article
Implementation of a C Library of Kalman Filters for Application on Embedded Systems
by Christina Schreppel, Andreas Pfeiffer, Julian Ruggaber and Jonathan Brembeck
Computers 2022, 11(11), 165; https://doi.org/10.3390/computers11110165 - 18 Nov 2022
Cited by 2 | Viewed by 3344
Abstract
Having knowledge about the states of a system is an important component of most control systems. However, an exact measurement of the states cannot always be provided, because it is either not technically possible or only possible with significant effort. Therefore, state estimation plays an important role in control applications, and the well-known and widely used Kalman filter is often employed for this purpose. This paper describes the implementation of nonlinear Kalman filter algorithms, the extended and the unscented Kalman filter with square-rooting, in the C programming language, suitable for use on embedded systems. The implementations support single- or double-precision data types depending on the application. The newly implemented filters are demonstrated in the context of semi-active vehicle damper control and the estimation of the tire–road friction coefficient as application examples, providing real-time capability. Their performances were evaluated in tests on an electronic control unit and a rapid-prototyping platform. Full article
(This article belongs to the Special Issue Real-Time Embedded Systems in IoT)
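For readers unfamiliar with the filter family the library implements, one predict/update cycle can be sketched in a few lines. This is a scalar, linear random-walk toy shown in Python for brevity; it is not the paper's C API, and the nonlinear (extended/unscented, square-root) variants add substantially more machinery.

```python
def kalman_step(x, p, z, q, r):
    """One predict/update cycle of a scalar Kalman filter.
    x: state estimate, p: estimate variance, z: measurement,
    q: process noise variance, r: measurement noise variance."""
    # Predict: the state is modeled as constant; process noise inflates uncertainty.
    p = p + q
    # Update: blend the prediction with the measurement using the Kalman gain.
    k = p / (p + r)
    x = x + k * (z - x)
    p = (1 - k) * p
    return x, p

# Filter noisy measurements of a constant true value (5.0).
measurements = [5.3, 4.7, 5.1, 4.9, 5.2, 5.0]
x, p = 0.0, 1e3            # deliberately vague initial estimate
for z in measurements:
    x, p = kalman_step(x, p, z, q=1e-4, r=0.25)
```

The estimate converges toward the true value while its variance shrinks, which is the behavior the embedded implementations must reproduce under fixed real-time and memory budgets.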
20 pages, 934 KiB  
Article
Arbitrarily Parallelizable Code: A Model of Computation Evaluated on a Message-Passing Many-Core System
by Sebastien Cook and Paulo Garcia
Computers 2022, 11(11), 164; https://doi.org/10.3390/computers11110164 - 18 Nov 2022
Viewed by 1211
Abstract
The number of processing elements per solution is growing. From embedded devices now employing (often heterogeneous) multi-core processors, across many-core scientific computing platforms, to distributed systems comprising thousands of interconnected processors, parallel programming of one form or another is now the norm. Understanding how to efficiently parallelize code, however, is still an open problem, and the difficulties are exacerbated across heterogeneous processing, especially at run time, when it is sometimes desirable to change the parallelization strategy to meet non-functional requirements (e.g., load balancing and power consumption). In this article, we investigate the use of a programming model based on series-parallel partial orders: computations are expressed as directed graphs that expose parallelization opportunities and necessary sequencing by construction. This programming model is suitable as an intermediate representation for higher-level languages. We then describe a model of computation for this programming model that maps such graphs into a stack-based structure more amenable to hardware processing. We describe the formal small-step semantics for this model of computation and use this formal description to show that the model can be arbitrarily parallelized, at compile time and runtime, with correct execution guaranteed by design. We empirically support this claim and evaluate parallelization benefits using a prototype open-source compiler targeting a message-passing many-core simulation. We empirically verify the correctness of arbitrary parallelization, supporting the validity of our formal semantics, analyze the distribution of operations within cores to understand the implementation impact of the paradigm, and assess execution-time improvements when five micro-benchmarks are automatically and randomly parallelized across 2 × 2 and 4 × 4 multi-core configurations, resulting in execution-time decreases of up to 95% in the best case.
Full article
(This article belongs to the Special Issue Real-Time Embedded Systems in IoT)
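The core idea of series-parallel partial orders can be sketched as follows: a graph whose "par" branches are independent by construction can be executed sequentially or on a thread pool with the same result. This is an illustrative model only; the paper targets a message-passing many-core system with its own compiler and stack-based semantics.

```python
from concurrent.futures import ThreadPoolExecutor

# A series-parallel task graph: children of a "par" node are independent by
# construction, children of a "seq" node must run in order; leaves are thunks.
def run(node, pool=None):
    kind, children = node
    if kind == "leaf":
        return children()
    if kind == "seq":
        return [run(c, pool) for c in children]      # enforced ordering
    # "par": branches may run in any order, or in parallel on a worker pool.
    if pool is None:
        return [run(c) for c in children]
    return list(pool.map(run, children))

graph = ("par", [
    ("leaf", lambda: 2 + 3),
    ("seq", [("leaf", lambda: 4), ("leaf", lambda: 4 * 4)]),
    ("leaf", lambda: 7),
])

sequential = run(graph)
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = run(graph, pool)
```

Because "par" nodes carry no hidden ordering constraints, swapping the execution strategy cannot change the result, which is the sense in which such code is "arbitrarily parallelizable".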
14 pages, 575 KiB  
Review
A Short Survey on Deep Learning for Multimodal Integration: Applications, Future Perspectives and Challenges
by Giovanna Maria Dimitri
Computers 2022, 11(11), 163; https://doi.org/10.3390/computers11110163 - 18 Nov 2022
Cited by 4 | Viewed by 4998
Abstract
Deep learning has achieved state-of-the-art performance in several research applications nowadays: from computer vision to bioinformatics, from object detection to image generation. In the context of these newly developed deep-learning approaches, we can define the concept of multimodality, a research field whose objective is to implement methodologies that can use several modalities as input features to perform predictions. There is a strong analogy here with human cognition, since we rely on several different senses to make decisions. In this article, we present a short survey on multimodal integration using deep-learning methods. We first comprehensively review the concept of multimodality, describing it from a two-dimensional perspective: the first dimension is a taxonomical description of the multimodality concept, and the second describes the fusion approaches in multimodal deep learning. Finally, we describe four applications of multimodal deep learning in the following fields of research: speech recognition, sentiment analysis, forensic applications and image processing. Full article
(This article belongs to the Special Issue Feature Papers in Computers 2023)
11 pages, 2796 KiB  
Article
Robust Automatic Modulation Classification Using Convolutional Deep Neural Network Based on Scalogram Information
by Ahmed Mohammed Abdulkarem, Firas Abedi, Hayder M. A. Ghanimi, Sachin Kumar, Waleed Khalid Al-Azzawi, Ali Hashim Abbas, Ali S. Abosinnee, Ihab Mahdi Almaameri and Ahmed Alkhayyat
Computers 2022, 11(11), 162; https://doi.org/10.3390/computers11110162 - 15 Nov 2022
Cited by 10 | Viewed by 2045
Abstract
This study proposed a two-stage method that combines a convolutional neural network (CNN) with the continuous wavelet transform (CWT) for multiclass modulation classification. The time-frequency information of the modulation signals was first extracted using the CWT and rendered as 2D images, which served as the CNN's input. In the second stage, the proposed algorithm classified the different kinds of modulation from the 2D time-frequency information. Six types of modulation, including amplitude-shift keying (ASK), phase-shift keying (PSK), frequency-shift keying (FSK), quadrature amplitude-shift keying (QASK), quadrature phase-shift keying (QPSK), and quadrature frequency-shift keying (QFSK), are automatically recognized by the new digital modulation classification model at SNRs between 0 and 25 dB. These modulation types are used in satellite, underwater, and military communication. Compared with earlier research, the recommended CNN learning model performs better in the presence of varying noise levels. Full article
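The two-stage idea, a time-frequency projection followed by classification, can be caricatured in pure Python: projecting each symbol of a signal onto probe sinusoids (a crude stand-in for the CWT scalogram) already separates a toy FSK signal from a toy ASK one. The signals, frequencies, and decision rule below are illustrative, not the paper's.

```python
import math

def tone(freq, n, fs, amp=1.0):
    """A pure sinusoid: n samples at sample rate fs."""
    return [amp * math.sin(2 * math.pi * freq * t / fs) for t in range(n)]

def band_energy(sig, freq, fs):
    """Energy of the signal's projection onto a probe sinusoid
    (a crude, single-scale stand-in for a CWT coefficient)."""
    re = sum(s * math.cos(2 * math.pi * freq * t / fs) for t, s in enumerate(sig))
    im = sum(s * math.sin(2 * math.pi * freq * t / fs) for t, s in enumerate(sig))
    return re * re + im * im

fs, n = 800, 400                                      # sample rate, samples per symbol
fsk = tone(100, n, fs) + tone(200, n, fs)             # FSK: frequency changes per symbol
ask = tone(100, n, fs, 1.0) + tone(100, n, fs, 0.2)   # ASK: amplitude changes per symbol

def scalogram(sig):
    """Per-symbol energies at the two probe frequencies: a tiny 2x2 'image'."""
    return [[band_energy(sig[i * n:(i + 1) * n], f, fs) for f in (100, 200)]
            for i in range(2)]

s_fsk, s_ask = scalogram(fsk), scalogram(ask)
```

In the paper, a CNN learns the decision boundary over full CWT scalograms instead of this hand-picked two-frequency summary.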
18 pages, 1948 KiB  
Article
Learning-Based Matched Representation System for Job Recommendation
by Suleiman Ali Alsaif, Minyar Sassi Hidri, Hassan Ahmed Eleraky, Imen Ferjani and Rimah Amami
Computers 2022, 11(11), 161; https://doi.org/10.3390/computers11110161 - 14 Nov 2022
Cited by 8 | Viewed by 3416
Abstract
Job recommender systems (JRS) are a subclass of information filtering systems that aim to help job seekers identify positions matching their skills and experience, preventing them from getting lost in the vast amount of information available on job boards that aggregate postings from many sources, such as LinkedIn or Indeed. A variety of strategies has been implemented as part of JRS; most fail to recommend vacancies that properly fit job seekers' profiles when dealing with more than one job offer, because they treat skills as passive entities associated with the job description that merely need to be matched to find the best recommendation. This paper provides a recommender system to assist job seekers in finding suitable jobs based on their resumes. The proposed system recommends the top-n jobs to the job seekers by analyzing and measuring the similarity between the job seeker's skills and the explicit features of job listings using content-based filtering. First-hand information was gathered by scraping job descriptions from Indeed for major cities in Saudi Arabia (Dammam, Jeddah, and Riyadh). The top skills required in job offers were then analyzed, and job recommendations were made by matching skills from resumes to posted jobs. To quantify recommendation success and error rates, we compared the results of our system to reality using decision support measures. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
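Content-based skill matching of this kind can be sketched with a set-based cosine similarity; the job titles, skill sets, and resume below are invented for illustration and are not from the scraped Indeed data.

```python
def cosine(a, b):
    """Cosine similarity between two skill sets, treated as binary vectors."""
    if not a or not b:
        return 0.0
    return len(a & b) / (len(a) ** 0.5 * len(b) ** 0.5)

def top_n(resume_skills, jobs, n=2):
    """Rank job postings by similarity to the resume's skills."""
    ranked = sorted(jobs, key=lambda title: cosine(resume_skills, jobs[title]),
                    reverse=True)
    return ranked[:n]

# Hypothetical postings; in the paper, skills come from parsed job descriptions.
jobs = {
    "Data Analyst (Riyadh)": {"python", "sql", "excel"},
    "Web Developer (Jeddah)": {"javascript", "html", "css"},
    "ML Engineer (Dammam)": {"python", "sql", "tensorflow", "statistics"},
}
resume = {"python", "sql", "statistics"}
recommended = top_n(resume, jobs)
```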
15 pages, 6387 KiB  
Article
Features Engineering for Malware Family Classification Based API Call
by Ammar Yahya Daeef, Ali Al-Naji and Javaan Chahl
Computers 2022, 11(11), 160; https://doi.org/10.3390/computers11110160 - 11 Nov 2022
Cited by 6 | Viewed by 2250
Abstract
Malware is used to carry out malicious operations on networks and computer systems. Consequently, malware classification is crucial for preventing malicious attacks. Application programming interface (API) calls are ideal candidates for characterizing malware behavior. However, the primary challenge is to produce API call features that allow classification algorithms to achieve high accuracy. To this end, this work employed Jaccard similarity and visualization analysis to find the hidden patterns created by the API calls of various malware. Traditional machine learning classifiers, i.e., random forest (RF), support vector machine (SVM), and k-nearest neighbors (KNN), were used in this research as alternatives to existing neural networks, which rely on API call sequences millions of calls long. The benchmark dataset used in this study contains 7107 samples of API call sequences, labeled with eight different malware families. The results showed that RF with the proposed API call features outperformed the long short-term memory (LSTM)- and gated recurrent unit (GRU)-based methods on the overall evaluation metrics. Full article
(This article belongs to the Special Issue Human Understandable Artificial Intelligence)
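A minimal sketch of Jaccard similarity over API call sets, in the spirit of the paper's KNN baseline; the sample traces are hypothetical (though the API names are real Win32/NT calls) and are not drawn from the benchmark dataset.

```python
def jaccard(a, b):
    """Jaccard similarity of two API call sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 1.0

# Hypothetical API call sets extracted from sandbox traces of three samples.
traces = {
    "sample_a": {"CreateFileW", "WriteFile", "RegSetValueExW", "InternetOpenA"},
    "sample_b": {"CreateFileW", "WriteFile", "RegSetValueExW", "HttpSendRequestA"},
    "sample_c": {"NtQuerySystemInformation", "OpenProcess", "ReadProcessMemory"},
}

def nearest_sample(trace, labeled):
    """1-NN over Jaccard similarity: the most behaviorally similar known sample."""
    return max(labeled, key=lambda name: jaccard(trace, labeled[name]))

query = {"CreateFileW", "WriteFile", "RegSetValueExW", "InternetOpenA"}
match = nearest_sample(query, traces)
```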
4 pages, 230 KiB  
Editorial
From Mean Time to Failure to Mean Time to Attack/Compromise: Incorporating Reliability into Cybersecurity
by Leandros Maglaras
Computers 2022, 11(11), 159; https://doi.org/10.3390/computers11110159 - 08 Nov 2022
Cited by 3 | Viewed by 1204
Abstract
Around the world, numerous companies strive to successfully facilitate digital transformation [...] Full article
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
21 pages, 2066 KiB  
Article
A Ranking Learning Model by K-Means Clustering Technique for Web Scraped Movie Data
by Kamal Uddin Sarker, Mohammed Saqib, Raza Hasan, Salman Mahmood, Saqib Hussain, Ali Abbas and Aziz Deraman
Computers 2022, 11(11), 158; https://doi.org/10.3390/computers11110158 - 08 Nov 2022
Cited by 4 | Viewed by 3650
Abstract
Business organizations experience cut-throat competition in the e-commerce era, where a smart organization needs to come up with faster innovative ideas to enjoy competitive advantages. A smart user decides based on the review information of an online product. Data-driven smart machine learning applications use real data to support immediate decision making. Web scraping technologies supply sufficient, relevant, and up-to-date well-structured data from unstructured data sources such as websites, and machine learning applications generate models for in-depth data analysis and decision making. The Internet Movie Database (IMDB) is one of the largest movie databases on the internet; IMDB movie information has been used for statistical analysis, sentiment classification, genre-based clustering, and rating-based clustering with respect to movie release year, budget, etc. This paper presents a novel clustering model with respect to two different rating systems of IMDB movie data. The work contributes to three areas: (i) the "grey area" of web scraping to extract data for research purposes; (ii) statistical analysis to correlate the required data fields and motivate the machine learning implementation; and (iii) k-means clustering applied to the movie critics' rank (Metascore) and the users' star rank (Rating). Different Python libraries were used for web data scraping, data analysis, data visualization, and the k-means clustering application. Only 42.4% of the records in the extracted dataset were accepted for research purposes after cleaning. Statistical analysis showed that votes, Rating, and Metascore have a linear relationship, while the income of a movie behaves randomly. On the other hand, experts' feedback (Metascore) and customers' feedback (Rating) are negatively correlated (−0.0384) due to the bias of additional features such as genre, actors, budget, etc. Both rankings have a nonlinear relationship with the income of the movies. Six optimal clusters were selected by the elbow technique, the calculated silhouette score for the proposed k-means clustering model is 0.4926, and only one cluster shows a logical relationship between the two ranking systems. Full article
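A plain k-means pass over (Metascore, Rating)-style pairs looks like this; the points and deterministic seeding are illustrative, and the paper's own pipeline uses standard Python libraries rather than this hand-rolled version.

```python
import math

def kmeans(points, k, iters=20):
    """Plain k-means on 2-D points, with simple deterministic seeding."""
    centers = [points[i] for i in range(k)]
    clusters = []
    for _ in range(iters):
        # Assignment step: each point joins its nearest center's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: math.dist(p, centers[j]))
            clusters[nearest].append(p)
        # Update step: each center moves to its cluster's mean.
        centers = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Hypothetical (Metascore, Rating) pairs forming two obvious groups.
points = [(80, 8.1), (85, 7.9), (78, 8.3), (30, 4.2), (35, 3.9), (28, 4.5)]
centers, clusters = kmeans(points, k=2)
sizes = sorted(len(cl) for cl in clusters)
```

In the paper, the cluster count is chosen with the elbow technique and the result is scored with the silhouette coefficient rather than fixed at k = 2.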
16 pages, 620 KiB  
Article
Students and Teachers’ Need for Sustainable Education: Lessons from the Pandemic
by Manuel Caeiro-Rodríguez, Mario Manso-Vázquez, Triinu Jesmin, Jaanus Terasmaa, Hariklia Tsalapata, Olivier Heidmann, Jussi Okkonen, Edward White, Carlos Vaz de Carvalho and Ioana-Andreea Stefan
Computers 2022, 11(11), 157; https://doi.org/10.3390/computers11110157 - 08 Nov 2022
Cited by 3 | Viewed by 2357
Abstract
The COVID-19 pandemic challenged the sustainability of higher education as millions of students were forced out of school, shifting to online learning instead of in-class education. In the Erasmus+ project, Virtual Presence in Higher Education Hybrid Learning Delivery (VIE), we were concerned with the level of readiness and the ability of higher-education students and teachers to face this changing situation. This paper reports the results of a survey which assessed the experiences that students and teachers had during the pandemic and, in particular, the development of soft skills through active learning methodologies. The project results show that there are still some unmet needs, but existing digital technologies, tools, and platforms already provide valuable solutions both for students and teachers that ensure a continuation of high-quality learning experiences. Full article
(This article belongs to the Special Issue Interactive Technology and Smart Education)
35 pages, 9363 KiB  
Article
Composite Spatial Manipulation Framework for Redirected Walking
by Nassr Alsaeedi and Albert Zündorf
Computers 2022, 11(11), 156; https://doi.org/10.3390/computers11110156 - 31 Oct 2022
Viewed by 1533
Abstract
In this study, we present a composite spatial manipulation framework for the redirected walking (RDW) technique. The proposed framework uses two different approaches simultaneously to manipulate the user's position and orientation in the physical space, aiming to substantially improve redirection in a confined physical space and reduce the spatial requirements of the RDW technique. Each approach exploits different perceptual processes. The first is a discrete spatial manipulation approach that introduces translation and/or rotation gains to the user's virtual perspective in the immersive virtual environment (IVE) during temporal events such as eyeblinks. The second is a continuous spatial manipulation approach, which introduces (with each frame) translation and/or rotation gains below the user's perception threshold to their virtual perspective in the IVE. Two simulation experiments were conducted: the first investigated the feasibility of adopting the composite spatial manipulation framework for RDW without considering the user's walking behavior or the impact of the proposed approach on user performance in the IVE, while the second investigated the performance of the proposed approach while taking the user's walking behavior and performance in the IVE into account. Finally, a user experiment was conducted to validate the proposed framework and its impact on the user's performance in the IVE. The findings revealed a significant improvement in redirection performance when the proposed controller was compared to the classical RDW controller, as well as a significant improvement in the user's performance when the composite RDW controller was utilized. Full article
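The continuous approach, injecting a per-frame gain kept below the perception threshold, can be sketched for the rotation case as follows. The threshold value and steering rule are illustrative assumptions, not the paper's calibrated parameters.

```python
import math

# Illustrative assumption, not the paper's calibrated value: an injected
# rotation rate (rad/s) taken to be below the user's perception threshold.
ROTATION_THRESHOLD = 0.068

def redirect(heading, target, dt):
    """Steer the virtual heading toward `target`, clamping the per-frame
    injected rotation gain below the (assumed) perception threshold."""
    error = (target - heading + math.pi) % (2 * math.pi) - math.pi
    step = max(-ROTATION_THRESHOLD * dt, min(ROTATION_THRESHOLD * dt, error))
    return heading + step

heading, dt = 0.0, 1 / 60
for _ in range(60 * 30):              # 30 s of frames at 60 Hz
    heading = redirect(heading, math.pi / 2, dt)
```

Because each frame's injected rotation is capped, the virtual heading drifts toward the target slowly enough that, under the stated assumption, the user does not notice the manipulation.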
22 pages, 5962 KiB  
Article
Intelligent Robotic Welding Based on a Computer Vision Technology Approach
by Nazar Kais AL-Karkhi, Wisam T. Abbood, Enas A. Khalid, Adnan Naji Jameel Al-Tamimi, Ali A. Kudhair and Oday Ibraheem Abdullah
Computers 2022, 11(11), 155; https://doi.org/10.3390/computers11110155 - 29 Oct 2022
Cited by 3 | Viewed by 3347
Abstract
Robots have become an essential part of welding departments in modern industries, increasing the accuracy and rate of production. Intelligent detection of welding line edges, so that the weld starts in the proper position, is very important. This work introduces a new image-processing approach that detects welding lines by tracking the edges of plates at the speed required by a three-degree-of-freedom robotic arm. The two algorithms combined in the developed approach are edge detection and the top-hat transformation. An adaptive neuro-fuzzy inference system (ANFIS) was used to choose the best forward and inverse kinematics of the robot. A MIG welding tool was mounted at the end-effector, and the weld was completed according to the required working conditions and performance. The parts of the system work together with compatible and consistent performance, with acceptable accuracy in tracking the welding path. Full article
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI)
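The top-hat transformation used alongside edge detection can be illustrated in one dimension: the white top-hat (signal minus its morphological opening) suppresses a broad background and keeps bright features narrower than the structuring element. The ramp-plus-peak signal below is a made-up stand-in for an image row crossing a weld line.

```python
def erode(sig, w):
    """Grayscale erosion: sliding-window minimum (window width w, clipped at edges)."""
    r = w // 2
    return [min(sig[max(0, i - r):i + r + 1]) for i in range(len(sig))]

def dilate(sig, w):
    """Grayscale dilation: sliding-window maximum."""
    r = w // 2
    return [max(sig[max(0, i - r):i + r + 1]) for i in range(len(sig))]

def top_hat(sig, w):
    """White top-hat: signal minus its morphological opening (erosion then
    dilation). Removes the slowly varying background, keeps narrow peaks."""
    opening = dilate(erode(sig, w), w)
    return [s - o for s, o in zip(sig, opening)]

# A narrow bright "weld line" riding on a broad intensity ramp.
signal = list(range(20))
signal[10] += 9
result = top_hat(signal, w=5)
```

After the transform, the ramp is flattened to (near) zero and the narrow peak stands out, which is what makes the subsequent edge/line detection robust to uneven illumination.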
16 pages, 519 KiB  
Review
A Critical Review on the 3D Cephalometric Analysis Using Machine Learning
by Shtwai Alsubai
Computers 2022, 11(11), 154; https://doi.org/10.3390/computers11110154 - 28 Oct 2022
Cited by 6 | Viewed by 2843
Abstract
Machine learning applications have momentously enhanced the quality of human life. The past few decades have seen the progression and application of machine learning in diverse medical fields. With the rapid advancement in technology, machine learning has secured prominence in the prediction and classification of diseases through medical images. This technological expansion in medical imaging has enabled the automated recognition of anatomical landmarks in radiographs. In this context, machine learning can support clinical decision support systems through image processing, a capability whose scope includes cephalometric analysis. Though the application of machine learning has been seen in dentistry and medicine, its progression in orthodontics has grown slowly despite promising outcomes. Therefore, the present study has performed a critical review of recent studies focused on the application of machine learning in 3D cephalometric analysis, consisting of landmark identification, decision making, and diagnosis. The study also examined the reliability and accuracy of existing methods that have employed machine learning in 3D cephalometry. In addition, the study outlines the integration of deep learning approaches into cephalometric analysis. Finally, the applications and challenges faced are briefly explained, and the review closes with a critical analysis from which the most recent research scope can be comprehended. Full article
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI)
27 pages, 1907 KiB  
Review
A Systemic Mapping Study of Business Intelligence Maturity Models for Higher Education Institutions
by Christopher Lee Stewart and M. Ali Akber Dewan
Computers 2022, 11(11), 153; https://doi.org/10.3390/computers11110153 - 28 Oct 2022
Cited by 1 | Viewed by 2612
Abstract
Higher education institutions (HEIs) are investing in business intelligence (BI) to meet the increasing demand for information stemming from their operations. Information technology (IT) managers in higher education may turn to BI maturity models to evaluate the current state of an HEI's BI capabilities and its readiness for future improvements. However, generic BI maturity models lack the domain-specific attributes that would ensure a high degree of compatibility with HEIs. This study's objective is to survey maturity models that could be used in HEIs, identify those used for BI, analyze their qualities, and identify future avenues for research into HEI-specific BI maturity models. A systemic mapping was undertaken via both a keyword and a snowball search of five indexing services; 6037 articles were processed using inclusion and exclusion criteria, resulting in the identification of forty-one academic works on maturity model uses, which were mapped to ten categories. The mapping reveals an increasing number of publications featuring maturity models for HEIs, particularly since 2018, focused on e-learning and ICT. A single instance of a BI maturity model for HEIs emerged in 2022, within the European HEI context. The HE-BIA MM has more dimensions than most other models identified, yet only a single co-occurrence of dimensions was identified, and in name only. We conclude that BI maturity models for HEIs are an emerging field of research; future directions include exploring the co-occurrence of dimensions with existing maturity models, performing case studies, and validating the HE-BIA MM outside the European HEI context. Full article
(This article belongs to the Special Issue Feature Papers in Computers 2023)