Mach. Learn. Knowl. Extr., Volume 6, Issue 2 (June 2024) – 9 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
19 pages, 2051 KiB  
Perspective
Concept Paper for a Digital Expert: Systematic Derivation of (Causal) Bayesian Networks Based on Ontologies for Knowledge-Based Production Steps
by Manja Mai-Ly Pfaff-Kastner, Ken Wenzel and Steffen Ihlenfeldt
Mach. Learn. Knowl. Extr. 2024, 6(2), 898-916; https://doi.org/10.3390/make6020042 - 25 Apr 2024
Abstract
Despite increasing digitalization and automation, complex production processes often require human judgment and adaptable decision-making. Humans can abstract knowledge and transfer it to new situations, which makes people in production an irreplaceable resource. This paper presents a new concept for digitizing human expertise and the human ability to make knowledge-based decisions in the production area, based on ontologies and causal Bayesian networks, as a basis for further research. Dedicated approaches for the ontology-based creation of Bayesian networks exist in the literature. Therefore, we first comprehensively analyze previous studies and summarize their approaches. We then add the causal perspective, which has often not been an explicit subject of consideration. We see a research gap in the systematic and structured approach to the ontology-based generation of causal graphs (CGs). At the current state of knowledge, the semantic understanding of a domain formalized in an ontology can contribute to a generic approach for deriving a CG. The ontology functions as a knowledge base by formally representing knowledge and experience. Causal inference calculations can mathematically imitate the human decision-making process under uncertainty. Therefore, a systematic ontology-based approach to building a CG can allow the digitization of the human ability to make decisions based on experience and knowledge.
(This article belongs to the Section Network)
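As a concrete illustration of the idea the abstract sketches, here is a hedged toy example of deriving a causal-graph skeleton from ontology triples. The relation names and production-domain triples are hypothetical, not taken from the paper, and networkx stands in for whatever ontology tooling the authors use.

```python
# Minimal sketch (not the authors' method): deriving a causal-graph
# skeleton from ontology triples. The relation names ("causes",
# "influences") and the triples themselves are hypothetical.
import networkx as nx

CAUSAL_RELATIONS = {"causes", "influences"}  # relations read as causal

triples = [
    ("ToolWear", "causes", "SurfaceRoughness"),
    ("FeedRate", "influences", "ToolWear"),
    ("FeedRate", "influences", "SurfaceRoughness"),
]

cg = nx.DiGraph()
for subj, pred, obj in triples:
    if pred in CAUSAL_RELATIONS:  # keep only causally interpreted relations
        cg.add_edge(subj, obj)

assert nx.is_directed_acyclic_graph(cg)  # a causal graph must be a DAG
print(sorted(cg.edges()))
```

The DAG check matters because ontology relations can form cycles that a causal graph, by definition, cannot contain.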

21 pages, 3643 KiB  
Article
Enhancing Legal Sentiment Analysis: A Convolutional Neural Network–Long Short-Term Memory Document-Level Model
by Bolanle Abimbola, Enrique de La Cal Marin and Qing Tan
Mach. Learn. Knowl. Extr. 2024, 6(2), 877-897; https://doi.org/10.3390/make6020041 - 19 Apr 2024
Abstract
This research investigates the application of deep learning to sentiment analysis of Canadian maritime case law. It offers a framework for improving maritime law and legal analytics policy-making procedures. The automation of legal document extraction takes center stage, underscoring the vital role sentiment analysis plays at the document level. This study therefore introduces a novel strategy for sentiment analysis in Canadian maritime case law, combining case law sentiment approaches with state-of-the-art deep learning techniques. The overarching goal is to systematically unearth hidden biases within case law and investigate their impact on legal outcomes. Employing a combined convolutional neural network (CNN) and long short-term memory (LSTM) model, this research achieves a remarkable accuracy of 98.05% for categorizing instances. In contrast, conventional machine learning techniques yield accuracy rates of 52.57% for the support vector machine (SVM), 57.44% for naïve Bayes, and 61.86% for logistic regression. The superior accuracy of the CNN-LSTM combination underscores its usefulness in legal sentiment analysis, with promising future applications in fields such as legal analytics and policy design. These findings mark a significant advance for AI-powered legal tools, presenting more sophisticated and sentiment-aware options for the legal profession.
(This article belongs to the Section Learning)
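For readers unfamiliar with the architecture named in the title, a minimal sketch of a document-level CNN-LSTM classifier follows, assuming Keras; all hyperparameters (vocabulary size, sequence length, filter counts) are placeholders rather than the paper's settings.

```python
# Hedged sketch of a document-level CNN-LSTM sentiment classifier.
# VOCAB_SIZE and MAX_LEN are hypothetical preprocessing choices.
from tensorflow.keras import layers, models

VOCAB_SIZE, MAX_LEN = 20_000, 512

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),             # integer token ids
    layers.Embedding(VOCAB_SIZE, 128),          # token embeddings
    layers.Conv1D(64, 5, activation="relu"),    # local n-gram features
    layers.MaxPooling1D(2),
    layers.LSTM(64),                            # long-range document context
    layers.Dense(1, activation="sigmoid"),      # document-level sentiment
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```

The convolution extracts phrase-level patterns before the LSTM aggregates them across the whole document, which is the usual rationale for stacking the two.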

35 pages, 2155 KiB  
Article
A Comprehensive Survey on Deep Learning Methods in Human Activity Recognition
by Michail Kaseris, Ioannis Kostavelis and Sotiris Malassiotis
Mach. Learn. Knowl. Extr. 2024, 6(2), 842-876; https://doi.org/10.3390/make6020040 - 18 Apr 2024
Abstract
Human activity recognition (HAR) remains an essential field of research with a growing range of real-world applications, from healthcare to industrial environments. As the volume of publications in this domain continues to grow, staying abreast of the most pertinent and innovative methodologies can be challenging. This survey provides a comprehensive overview of the state-of-the-art methods employed in HAR, embracing both classical machine learning techniques and their recent advancements. We investigate a plethora of approaches that leverage diverse input modalities, including, but not limited to, accelerometer data, video sequences, and audio signals. Recognizing the challenge of navigating the vast and ever-growing HAR literature, we introduce a novel methodology that employs large language models to efficiently filter and pinpoint relevant academic papers. This not only reduces manual effort but also ensures the inclusion of the most influential works. We also provide a taxonomy of the examined literature to give scholars rapid and organized access when studying HAR approaches. Through this survey, we aim to provide researchers and practitioners with a holistic understanding of the current HAR landscape, its evolution, and promising avenues for future exploration.
(This article belongs to the Section Learning)
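The LLM-based literature filtering the survey describes could look roughly like the following hypothetical sketch; the prompt, model name, and use of the openai client are assumptions, not the authors' pipeline.

```python
# Hypothetical sketch of LLM-assisted paper filtering; the prompt and
# model name are assumptions, and openai is one possible backend.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def is_relevant(title: str, abstract: str) -> bool:
    """Ask the model for a yes/no relevance call on one candidate paper."""
    prompt = (
        "Does the following paper propose or evaluate a deep learning "
        "method for human activity recognition? Answer YES or NO.\n\n"
        f"Title: {title}\nAbstract: {abstract}"
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content.strip().upper().startswith("YES")

papers = [("A CNN for wearable-sensor HAR", "We classify accelerometer ...")]
shortlist = [p for p in papers if is_relevant(*p)]
```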

15 pages, 4325 KiB  
Article
Impact of Nature of Medical Data on Machine and Deep Learning for Imbalanced Datasets: Clinical Validity of SMOTE Is Questionable
by Seifollah Gholampour
Mach. Learn. Knowl. Extr. 2024, 6(2), 827-841; https://doi.org/10.3390/make6020039 - 15 Apr 2024
Abstract
Dataset imbalances pose a significant challenge to predictive modeling in both medical and financial domains, where conventional strategies, including resampling and algorithmic modifications, often fail to adequately address minority-class underrepresentation. This study theoretically and practically investigates how the inherent nature of medical data affects the classification of minority classes. It employs ten machine and deep learning classifiers, ranging from ensemble learners to cost-sensitive algorithms, across comparably sized medical and financial datasets. Despite these efforts, none of the classifiers achieved effective classification of the minority class in the medical dataset, with sensitivity below 5.0% and area under the curve (AUC) below 57.0%. In contrast, the same classifiers applied to the financial dataset demonstrated strong discriminative power, with overall accuracy exceeding 95.0%, sensitivity over 73.0%, and AUC above 96.0%. This disparity underscores the unpredictable variability inherent in medical data, as exemplified by the dispersed and homogeneous distribution of the minority class among the other classes in principal component analysis (PCA) graphs. The application of the synthetic minority oversampling technique (SMOTE) introduced 62 synthetic patients based on merely 20 original cases, casting doubt on its clinical validity and its representation of real-world patient variability. Furthermore, post-SMOTE feature importance analysis, utilizing SHapley Additive exPlanations (SHAP) and tree-based methods, contradicted established cerebral stroke parameters, further questioning the clinical coherence of synthetic dataset augmentation. These findings call into question the clinical validity of the SMOTE technique and highlight the urgent need for advanced modeling techniques and algorithmic innovations that can predict minority-class outcomes in medical datasets without depending on resampling strategies. Such methods must be not only theoretically robust but also clinically relevant and applicable to real-world clinical scenarios. Consequently, this study points to the need for future research to bridge the gap between theoretical advancements and the practical clinical application of techniques like SMOTE in healthcare.
(This article belongs to the Topic Communications Challenges in Health and Well-Being)
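For context, here is a minimal sketch of the kind of SMOTE-based oversampling the study critiques, using imbalanced-learn; the feature data are synthetic, with a 20-case minority class echoing the abstract.

```python
# Minimal sketch of SMOTE oversampling (the technique under critique),
# on synthetic data with 20 minority cases, as in the abstract.
import numpy as np
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1, (200, 5)),   # majority class
               rng.normal(0.5, 1, (20, 5))])   # 20 minority cases
y = np.array([0] * 200 + [1] * 20)

# SMOTE interpolates between a minority sample and its nearest minority
# neighbors, so every synthetic "patient" is a convex combination of
# real ones -- the clinical-validity concern raised in the abstract.
X_res, y_res = SMOTE(k_neighbors=5, random_state=0).fit_resample(X, y)
print(np.bincount(y_res))  # classes balanced: [200 200]
```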

27 pages, 1266 KiB  
Article
A Meta Algorithm for Interpretable Ensemble Learning: The League of Experts
by Richard Vogel, Tobias Schlosser, Robert Manthey, Marc Ritter, Matthias Vodel, Maximilian Eibl and Kristan Alexander Schneider
Mach. Learn. Knowl. Extr. 2024, 6(2), 800-826; https://doi.org/10.3390/make6020038 - 09 Apr 2024
Abstract
Background. The importance of explainable artificial intelligence and machine learning (XAI/XML) is increasingly being recognized, with the aim of understanding how information contributes to decisions and how sensitive a method is to bias or data pathologies. Efforts are often directed toward post hoc explanations of black-box models. These approaches introduce additional sources of error without resolving the underlying shortcomings. Less effort is directed toward the design of intrinsically interpretable approaches. Methods. We introduce an intrinsically interpretable methodology motivated by ensemble learning: the League of Experts (LoE) model. We first establish the theoretical framework and then deduce a modular meta algorithm. Our description focuses primarily on classification problems; however, LoE applies equally to regression problems. As a particular instance for classification problems, we employ ensembles of classical decision trees. This choice facilitates the derivation of human-understandable decision rules for the underlying classification problem, resulting in a derived rule learning system denoted RuleLoE. Results. In addition to 12 KEEL classification datasets, we employ two standard datasets from particularly relevant domains, medicine and finance, to illustrate the LoE algorithm. The performance of LoE with respect to accuracy and rule coverage is comparable to common state-of-the-art classification methods. Moreover, LoE delivers a clearly understandable set of decision rules with adjustable complexity that describes the classification problem. Conclusions. LoE is a reliable method for classification and regression problems, with an accuracy appropriate for situations in which the underlying causalities are at the center of interest rather than merely accurate predictions or classifications.
(This article belongs to the Special Issue Advances in Explainable Artificial Intelligence (XAI): 2nd Edition)
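A flavor of the rule-extraction ingredient that RuleLoE builds on can be given with plain scikit-learn decision trees; this is a generic sketch, not the authors' LoE implementation.

```python
# Generic sketch: reading human-understandable rules off a fitted
# decision tree, the ingredient a rule learning system builds on.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text renders the tree as nested if/else rules with thresholds,
# i.e., a directly inspectable decision procedure.
print(export_text(tree, feature_names=list(X.columns)))
```

Capping `max_depth` is one simple way to trade accuracy for the adjustable rule complexity the abstract mentions.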

11 pages, 1606 KiB  
Article
Effective Data Reduction Using Discriminative Feature Selection Based on Principal Component Analysis
by Faith Nwokoma, Justin Foreman and Cajetan M. Akujuobi
Mach. Learn. Knowl. Extr. 2024, 6(2), 789-799; https://doi.org/10.3390/make6020037 - 03 Apr 2024
Abstract
Effective data reduction must retain the greatest possible amount of the informative content of the data under examination. Feature selection is the default approach to dimensionality reduction, as the relevant features of a dataset are usually retained through this method. In this study, we used unsupervised learning to discover the top-k discriminative features present in a large multivariate IoT dataset. We used the statistics of principal component analysis to filter the relevant features based on the ranks of the features along the principal directions, while also considering the coefficients of the components. The selected number of principal components was used to decide the number of features to be selected in the singular value decomposition (SVD) process. A number of experiments were conducted using different benchmark datasets, and the effectiveness of the proposed method was evaluated based on the reconstruction error. The validity of the results was verified by applying the algorithm to a large IoT dataset and comparing the performance, in terms of accuracy and reconstruction error, to the results on the benchmark datasets. The performance evaluation showed consistency with the results obtained on the benchmark datasets, namely high accuracy and low reconstruction error.
(This article belongs to the Topic Big Data Intelligence: Methodologies and Applications)
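A hedged sketch of PCA-based feature ranking in the spirit of the abstract follows; the particular weighting of loadings by explained variance is an assumption about the method, and the data are synthetic.

```python
# Sketch of PCA-based discriminative feature ranking (assumed scoring:
# loading magnitudes weighted by explained variance); synthetic data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))        # stand-in for an IoT dataset
X[:, 3] += 2 * X[:, 7]                # inject correlated structure

Xs = StandardScaler().fit_transform(X)
pca = PCA(n_components=0.95).fit(Xs)  # keep 95% of the variance

# score_j = sum_k |loading_{k,j}| * explained_variance_ratio_k
scores = np.abs(pca.components_).T @ pca.explained_variance_ratio_
top_k = np.argsort(scores)[::-1][:5]  # indices of top-5 features
print(top_k, scores[top_k])
```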

19 pages, 2757 KiB  
Article
Birthweight Range Prediction and Classification: A Machine Learning-Based Sustainable Approach
by Dina A. Alabbad, Shahad Y. Ajibi, Raghad B. Alotaibi, Noura K. Alsqer, Rahaf A. Alqahtani, Noor M. Felemban, Atta Rahman, Sumayh S. Aljameel, Mohammed Imran Basheer Ahmed and Mustafa M. Youldash
Mach. Learn. Knowl. Extr. 2024, 6(2), 770-788; https://doi.org/10.3390/make6020036 - 01 Apr 2024
Abstract
An accurate prediction of fetal birth weight is crucial in ensuring safe delivery without health complications for the mother and baby. The uncertainty surrounding the fetus's birth situation, including its weight range, can lead to significant risks for both mother and baby. Since there is a standard birth weight range, a fetus that exceeds or falls below this range can face considerable health problems. Although ultrasound imaging is commonly used to predict fetal weight, it does not always provide accurate readings, which may lead to unnecessary decisions such as early delivery and cesarean section. Moreover, no supporting system is available to predict the weight range in Saudi Arabia. Therefore, leveraging the available technologies to build a system that can serve as a second opinion for doctors and health professionals is essential. Machine learning (ML) offers significant advantages to numerous fields and can address various issues. As such, this study aims to utilize ML techniques to build a predictive model that classifies the birthweight range of infants as low, normal, or high. For this purpose, two datasets were used: one from King Fahd University Hospital (KFUH), Saudi Arabia, and another publicly available dataset from the Institute of Electrical and Electronics Engineers (IEEE) data port. KFUH's best result was obtained with the Extra Trees model, achieving an accuracy, precision, recall, and F1-score of 98%, with a specificity of 99%. On the other hand, using the Random Forest model, the IEEE dataset attained an accuracy, precision, recall, and F1-score of 96%, with a specificity of 98%. These results suggest that the proposed ML system can provide reliable predictions, which could be of significant value for doctors and health professionals in Saudi Arabia.
(This article belongs to the Special Issue Sustainable Applications for Machine Learning)
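The abstract's best-performing setup, an Extra Trees classifier for a three-class birthweight range, can be sketched generically with scikit-learn; the synthetic features and labels below are placeholders for the KFUH and IEEE data.

```python
# Generic sketch of three-class birthweight-range classification with
# Extra Trees; features and labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))       # e.g., maternal/fetal measurements
y = rng.integers(0, 3, size=1000)    # 0 = low, 1 = normal, 2 = high

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = ExtraTreesClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te),
                            target_names=["low", "normal", "high"]))
```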

19 pages, 5712 KiB  
Article
Soil Sampling Map Optimization with a Dual Deep Learning Framework
by Tan-Hanh Pham and Kim-Doang Nguyen
Mach. Learn. Knowl. Extr. 2024, 6(2), 751-769; https://doi.org/10.3390/make6020035 - 29 Mar 2024
Abstract
Soil sampling constitutes a fundamental process in agriculture, enabling precise soil analysis and optimal fertilization. The automated selection of accurate soil sampling locations representative of a given field is critical for informed soil treatment decisions. This study leverages recent advancements in deep learning to develop efficient tools for generating soil sampling maps. We propose two models, namely UDL and UFN, which result from innovations in machine learning architecture design and integration. The models are trained on a comprehensive soil sampling dataset collected from local farms in South Dakota. The data include five key attributes: aspect, flow accumulation, slope, normalized difference vegetation index, and yield. The inputs to the models are multispectral images, and the ground truths are highly unbalanced binary images. To address this challenge, we devise a feature extraction technique that finds patterns and characteristics in the data before using these refined features to generate soil sampling maps. Our approach is centered around building a refiner that extracts fine features and a selector that utilizes these features to produce prediction maps containing the selected optimal soil sampling locations. Our experimental results demonstrate the superiority of our tools compared to existing methods: during testing, the proposed models achieve the highest mean Intersection over Union of 60.82% and mean Dice coefficient of 73.74%. The research not only introduces an innovative tool for soil sampling but also lays the foundation for integrating traditional and modern soil sampling methods. This work provides a promising solution for precision agriculture and soil management.
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
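The two reported metrics, mean Intersection over Union and Dice coefficient, are standard overlap measures for binary masks; a minimal NumPy sketch (not the authors' evaluation code) follows.

```python
# Standard overlap metrics for binary segmentation masks.
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0  # both empty: perfect match

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2 * inter / total if total else 1.0

pred = np.zeros((64, 64), bool); pred[10:20, 10:20] = True
gt   = np.zeros((64, 64), bool); gt[12:22, 10:20] = True
print(f"IoU={iou(pred, gt):.3f}, Dice={dice(pred, gt):.3f}")
```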

15 pages, 911 KiB  
Article
A New and Lightweight R-Peak Detector Using the TEDA Evolving Algorithm
by Lucileide M. D. da Silva, Sérgio N. Silva, Luísa C. de Souza, Karolayne S. de Azevedo, Luiz Affonso Guedes and Marcelo A. C. Fernandes
Mach. Learn. Knowl. Extr. 2024, 6(2), 736-750; https://doi.org/10.3390/make6020034 - 29 Mar 2024
Abstract
The literature on ECG delineation algorithms has grown significantly in recent decades. However, several challenges still need to be addressed. This work proposes a lightweight R-peak-detection algorithm that requires no pre-set parameters and performs classification on a sample-by-sample basis. The novelty of the proposed approach lies in the use of the typicality and eccentricity data analytics (TEDA) algorithm for R-peak detection. The proposed method consists of three phases. First, the ECG signal is preprocessed by calculating the signal's slope and applying filtering techniques. Next, the preprocessed signal is fed into the TEDA algorithm for R-peak estimation. Finally, in the third step, R-peak identification is carried out. To evaluate the effectiveness of the proposed technique, experiments were conducted on the MIT-BIH arrhythmia database (MIT-AD) for R-peak detection and validation. The results demonstrate that the proposed evolving algorithm achieved a sensitivity (Se), positive predictivity (+P), and accuracy (ACC) of 95.45%, 99.61%, and 95.09%, respectively, with a tolerance (TOL) of 100 milliseconds. One key advantage of the proposed technique is its low computational complexity, as it is based on a statistical framework computed recursively. It employs the concepts of typicality and eccentricity to determine whether a given sample is normal or anomalous within the dataset. Unlike most traditional methods, it does not require signal buffering or windowing. Furthermore, the proposed technique employs simple decision rules rather than heuristic approaches, further contributing to its computational efficiency.
(This article belongs to the Topic Bioinformatics and Intelligent Information Processing)
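The recursive typicality/eccentricity machinery behind TEDA can be sketched in a few lines; this follows Angelov's general formulation as an assumption and omits the paper's slope preprocessing and decision rules.

```python
# Hedged sketch of recursive eccentricity-based anomaly flagging in the
# style of TEDA (assumed formulation after Angelov); not the paper's
# detector, which adds slope preprocessing and R-peak decision rules.
import numpy as np

def teda_stream(samples, m=3.0):
    """Yield (sample, is_anomaly) using recursive mean/variance."""
    mu, var = 0.0, 0.0
    for k, x in enumerate(samples, start=1):
        mu = (k - 1) / k * mu + x / k                  # recursive mean
        if k > 1:
            var = (k - 1) / k * var + (x - mu) ** 2 / (k - 1)
        if k < 3 or var == 0:
            yield x, False                             # warm-up samples
            continue
        ecc = 1.0 / k + (mu - x) ** 2 / (k * var)      # eccentricity
        zeta = ecc / 2.0                               # normalized
        yield x, zeta > (m * m + 1) / (2 * k)          # m-sigma rule

signal = np.concatenate([np.random.default_rng(0).normal(0, 1, 200), [9.0]])
anomalies = [i for i, (_, flag) in enumerate(teda_stream(signal)) if flag]
print(anomalies)  # the spike at index 200 should be flagged
```

Because the mean and variance are updated recursively per sample, no buffering or windowing is needed, which is the low-complexity property the abstract emphasizes.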
