Mach. Learn. Knowl. Extr., Volume 5, Issue 1 (March 2023) – 20 articles

Cover Story: Explainable AI (XAI) aims to make black-box models more transparent for humans. Fortunately, plenty of XAI methods have been introduced to tackle the explainability problem from different perspectives. However, due to the vast search space, it is challenging for ML practitioners to start with the development of XAI software and to select the most suitable XAI methods. To address this challenge, XAIR is introduced, which is a systematic meta-review of the most promising XAI methods and tools aligned to the five steps of the software development process, including requirement analysis, design, implementation, evaluation, and deployment. This mapping aims to clarify the steps involved in developing XAI software and to encourage the integration of explainability in AI applications.
16 pages, 1978 KiB  
Article
Human Action Recognition-Based IoT Services for Emergency Response Management
by Talal H. Noor
Mach. Learn. Knowl. Extr. 2023, 5(1), 330-345; https://doi.org/10.3390/make5010020 - 13 Mar 2023
Cited by 1 | Viewed by 2064
Abstract
Emergency incidents can appear at any time and in any place, which makes it very challenging for emergency medical services practitioners to predict the location and the time of such emergencies. The dynamic nature of the appearance of emergency incidents can cause delays in emergency medical services, which can sometimes lead to vital injury complications or even death. The delay of emergency medical services may occur as a result of a call that was made too late or because no one was present to make the call. With the emergence of smart cities and promising technologies, such as the Internet of Things (IoT) and computer vision techniques, such issues can be tackled. This article proposes a human action recognition-based IoT services architecture for emergency response management. In particular, the architecture exploits IoT devices (e.g., surveillance cameras) that are distributed in public areas to detect emergency incidents, make a request for the nearest emergency medical services, and send emergency location information. Moreover, this article proposes an emergency incident detection model based on human action recognition and object tracking, using image processing and classifying the collected images based on action modeling. The primary notion of the proposed model is to classify human activity, whether it is an emergency incident or another daily activity, using a Convolutional Neural Network (CNN) and a Support Vector Machine (SVM). To demonstrate the feasibility of the proposed emergency detection model, several experiments were conducted using the UR fall detection dataset, which consists of footage of emergencies and other daily activities. The results of the conducted experiments were promising, with the proposed model scoring 0.99, 0.97, 0.97, and 0.98 in terms of sensitivity, specificity, precision, and accuracy, respectively. Full article
(This article belongs to the Special Issue Deep Learning in Image Analysis and Pattern Recognition)
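As a rough, editorial illustration of the CNN-plus-SVM classification step described above (not the authors' implementation), the Python sketch below extracts CNN features from video frames and classifies them with an SVM; the ResNet-18 backbone, the random placeholder frames, and the labels are all assumptions.

```python
# Illustrative sketch only, not the paper's exact pipeline: frames are assumed to be already
# cropped to the tracked person, and random placeholder data stands in for UR Fall footage.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC

# Hypothetical CNN feature extractor (ResNet-18 backbone with its classifier head removed;
# a real system would use pretrained or task-trained weights instead of random ones).
backbone = models.resnet18(weights=None)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([T.ToPILImage(), T.Resize((224, 224)), T.ToTensor()])

def extract_features(frames):
    """Map a list of HxWx3 uint8 frames to CNN feature vectors."""
    with torch.no_grad():
        batch = torch.stack([preprocess(f) for f in frames])
        return backbone(batch).numpy()

# Placeholder frames and labels (1 = emergency such as a fall, 0 = other daily activity)
frames = [np.random.randint(0, 255, (240, 320, 3), dtype=np.uint8) for _ in range(40)]
labels = np.random.randint(0, 2, 40)

X = extract_features(frames)
clf = SVC(kernel="rbf").fit(X[:30], labels[:30])
print("held-out accuracy:", clf.score(X[30:], labels[30:]))
```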
26 pages, 3025 KiB  
Article
A Survey on GAN Techniques for Data Augmentation to Address the Imbalanced Data Issues in Credit Card Fraud Detection
by Emilija Strelcenia and Simant Prakoonwit
Mach. Learn. Knowl. Extr. 2023, 5(1), 304-329; https://doi.org/10.3390/make5010019 - 11 Mar 2023
Cited by 15 | Viewed by 6600
Abstract
Data augmentation is an important procedure in deep learning, and GAN-based data augmentation can be utilized in many domains. For instance, in the credit card fraud domain, the imbalanced dataset problem is a major one, as credit card fraud cases are in the minority compared to legal payments. Generative techniques are considered effective ways to tackle this class imbalance, as they rebalance the minority and majority classes before training. More recently, Generative Adversarial Networks (GANs) have become one of the most popular data generation techniques, as they are applicable in big data settings. This research presents a survey on data augmentation using various GAN variants in the credit card fraud detection domain. In this survey, we offer a comprehensive summary of several peer-reviewed research papers on GAN synthetic generation techniques for fraud detection in the financial sector. In addition, this survey includes various solutions proposed by different researchers to balance imbalanced classes. Finally, this work concludes by pointing out the limitations of the most recent research articles and future research issues, and proposes solutions to address these problems. Full article
(This article belongs to the Special Issue Privacy and Security in Machine Learning)
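For readers unfamiliar with GAN-based oversampling, the following minimal sketch shows how a vanilla GAN could be trained on minority-class (fraud) rows and then sampled to rebalance a tabular dataset; the architecture, hyper-parameters, and placeholder data are illustrative and not drawn from any surveyed paper.

```python
# Minimal vanilla-GAN sketch for oversampling the minority (fraud) class of a tabular dataset.
import torch
import torch.nn as nn

n_features, latent_dim = 30, 16          # e.g., 30 transaction features (assumption)
fraud = torch.randn(200, n_features)     # placeholder for real minority-class rows

G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, n_features))
D = nn.Sequential(nn.Linear(n_features, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    # Discriminator update: real minority rows vs. generated rows
    z = torch.randn(64, latent_dim)
    fake = G(z).detach()
    real = fraud[torch.randint(0, len(fraud), (64,))]
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to fool the discriminator
    z = torch.randn(64, latent_dim)
    g_loss = bce(D(G(z)), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Synthetic fraud rows that would be appended to the training set to rebalance the classes
synthetic_fraud = G(torch.randn(500, latent_dim)).detach()
```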
17 pages, 912 KiB  
Article
Skew Class-Balanced Re-Weighting for Unbiased Scene Graph Generation
by Haeyong Kang and Chang D. Yoo
Mach. Learn. Knowl. Extr. 2023, 5(1), 287-303; https://doi.org/10.3390/make5010018 - 10 Mar 2023
Cited by 2 | Viewed by 2062
Abstract
An unbiased scene graph generation (SGG) algorithm referred to as Skew Class-Balanced Re-Weighting (SCR) is proposed to address the biased predicate predictions caused by the long-tailed distribution. Prior works focus mainly on alleviating the deteriorating performance of minority predicate predictions, at the cost of drastically dropping recall scores for the majority predicates. The trade-off between majority and minority predicate performance in the limited SGG datasets has not yet been analyzed correctly. In this paper, to alleviate this issue, the Skew Class-Balanced Re-Weighting (SCR) loss function is proposed for unbiased SGG models. Leveraging the skewness of biased predicate predictions, SCR estimates the target predicate weight coefficients and then re-weights the biased predicates more strongly, trading off the majority predicates against the minority ones. Extensive experiments conducted on the standard Visual Genome dataset and Open Images V4 and V6 show the performance and generality of SCR with traditional SGG models. Full article
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
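The sketch below shows a generic class-frequency re-weighted cross-entropy for long-tailed predicate classification. It is not the paper's SCR estimator, which derives weights from the skew of the biased predictions, but it illustrates where such a re-weighting term plugs into training.

```python
# Generic class-balanced re-weighting sketch (NOT the paper's SCR); illustrative only.
import torch
import torch.nn.functional as F

def class_balanced_weights(class_counts, beta=0.999):
    """'Effective number of samples' weighting for a long-tailed label distribution."""
    counts = torch.as_tensor(class_counts, dtype=torch.float)
    effective_num = 1.0 - beta ** counts
    weights = (1.0 - beta) / effective_num
    return weights / weights.sum() * len(counts)

predicate_counts = [12000, 900, 45, 7]     # illustrative long-tailed predicate counts
weights = class_balanced_weights(predicate_counts)

logits = torch.randn(8, 4)                 # placeholder SGG model outputs for 8 relations
targets = torch.randint(0, 4, (8,))
loss = F.cross_entropy(logits, targets, weight=weights)  # minority errors cost more
```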
18 pages, 3481 KiB  
Article
Painting the Black Box White: Experimental Findings from Applying XAI to an ECG Reading Setting
by Federico Cabitza, Andrea Campagner, Chiara Natali, Enea Parimbelli, Luca Ronzio and Matteo Cameli
Mach. Learn. Knowl. Extr. 2023, 5(1), 269-286; https://doi.org/10.3390/make5010017 - 08 Mar 2023
Cited by 4 | Viewed by 2636
Abstract
The emergence of black-box, subsymbolic, and statistical AI systems has motivated a rapid increase in the interest regarding explainable AI (XAI), which encompasses both inherently explainable techniques, as well as approaches to make black-box AI systems explainable to human decision makers. Rather than always making black boxes transparent, these approaches are at risk of painting the black boxes white, thus failing to provide a level of transparency that would increase the system’s usability and comprehensibility, or even at risk of generating new errors (i.e., white-box paradox). To address these usability-related issues, in this work we focus on the cognitive dimension of users’ perception of explanations and XAI systems. We investigated these perceptions in light of their relationship with users’ characteristics (e.g., expertise) through a questionnaire-based user study involving 44 cardiology residents and specialists in an AI-supported ECG reading task. Our results point to the relevance and correlation of the dimensions of trust, perceived quality of explanations, and tendency to defer the decision process to automation (i.e., technology dominance). This contribution calls for the evaluation of AI-based support systems from a human–AI interaction-oriented perspective, laying the ground for further investigation of XAI and its effects on decision making and user experience. Full article
(This article belongs to the Special Issue Advances in Explainable Artificial Intelligence (XAI))
17 pages, 4675 KiB  
Article
A Novel Pipeline Age Evaluation: Considering Overall Condition Index and Neural Network Based on Measured Data
by Hassan Noroznia, Majid Gandomkar, Javad Nikoukar, Ali Aranizadeh and Mirpouya Mirmozaffari
Mach. Learn. Knowl. Extr. 2023, 5(1), 252-268; https://doi.org/10.3390/make5010016 - 20 Feb 2023
Cited by 7 | Viewed by 3806
Abstract
Today, the chemical corrosion of metals is one of the main problems of large production facilities, especially in the oil and gas industries. Due to the massive downtime connected to corrosion failures, pipeline corrosion is a central issue in many oil and gas industries. Therefore, the determination of the corrosion progress of oil and gas pipelines is crucial for monitoring reliability and alleviating failures, which can positively impact health, safety, and the environment. Gas transmission and distribution pipes and other structures buried (or immersed) in an electrolyte corrode as a result of the existing conditions and their metallurgical structure. After some time, this corrosion causes damage that disrupts an active system and its processes. The worst corrosion for metals buried in the soil occurs in areas where electrical (stray) currents leave the metal. Therefore, cathodic protection (CP) is the most effective method to prevent the corrosion of structures buried in the soil. Our aim in this paper is first to investigate the effect of stray currents on the failure rate using the condition index, and then to estimate the remaining useful life of CP gas pipelines using an artificial neural network (ANN). Predicting future values using previous data based on the time series feature is also possible. Therefore, this paper first uses the general equipment condition monitoring method to detect failures. The time series model of the data is then measured and operated on by neural networks. Finally, the amount of failure over time is determined. Full article
(This article belongs to the Section Network)
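A hedged sketch of the forecasting idea: a small neural network trained on a sliding window of past condition-index values is rolled forward until a failure threshold is crossed. The index series, network size, and threshold are invented placeholders, not the paper's measured cathodic-protection data.

```python
# Sketch: remaining-useful-life estimate from a neural network fitted to a condition index.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
condition_index = 100 - 0.4 * np.arange(120) + rng.normal(0, 1.0, 120)  # degrading asset

def make_windows(series, width=6):
    """Turn a 1-D series into (window, next value) training pairs."""
    X = np.array([series[i:i + width] for i in range(len(series) - width)])
    y = series[width:]
    return X, y

X, y = make_windows(condition_index)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0).fit(X, y)

# Roll the model forward to estimate when the index crosses an assumed failure threshold of 40
window, months = list(condition_index[-6:]), 0
while window[-1] > 40 and months < 240:
    window.append(model.predict([window[-6:]])[0])
    months += 1
print(f"Estimated remaining useful life: ~{months} months (illustrative only)")
```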
15 pages, 3043 KiB  
Article
Can Principal Component Analysis Be Used to Explore the Relationship of Rowing Kinematics and Force Production in Elite Rowers during a Step Test? A Pilot Study
by Matt Jensen, Trent Stellingwerff, Courtney Pollock, James Wakeling and Marc Klimstra
Mach. Learn. Knowl. Extr. 2023, 5(1), 237-251; https://doi.org/10.3390/make5010015 - 17 Feb 2023
Cited by 1 | Viewed by 1903
Abstract
Investigating the relationship between the movement patterns of multiple limb segments during the rowing stroke and the resulting force production in elite rowers can provide foundational insight into optimal technique. It can also highlight potential mechanisms of injury and performance improvement. The purpose of this study was to conduct a kinematic analysis of the rowing stroke together with force production during a step test in elite national-team heavyweight men to evaluate the fundamental patterns that contribute to expert performance. Twelve elite heavyweight male rowers performed a step test on a row-perfect sliding ergometer [5 × 1 min with 1 min rest at set stroke rates (20, 24, 28, 32, 36)]. Joint angle displacement and velocity of the hip, knee and elbow were measured with electrogoniometers, and force was measured with a tension/compression force transducer in line with the handle. To explore interactions between kinematic patterns and stroke performance variables, joint angular velocities of the hip, knee and elbow were entered into a principal component analysis (PCA), and separate ANCOVAs were run with each performance variable (peak force, impulse, split time) as the dependent variable, the kinematic loading scores (Kpc,ls) as covariates, and athlete/stroke rate as fixed factors. The results suggested that rowers’ kinematic patterns respond differently across varying stroke rates. The first seven PCs accounted for 79.5% (PC1 [26.4%], PC2 [14.6%], PC3 [11.3%], PC4 [8.4%], PC5 [7.5%], PC6 [6.5%], PC7 [4.8%]) of the variance in the signal. The PCs contributing significantly (p ≤ 0.05) to performance metrics based on PC loading scores from the ANCOVA were (PC1, PC2, PC6) for split time, (PC3, PC4, PC5, PC6) for impulse, and (PC1, PC6, PC7) for peak force. The significant PCs for each performance measure were used to reconstruct the kinematic patterns for split time, impulse and peak force separately. Overall, PCA was able to differentiate between rowers and stroke rates, and revealed features of the rowing-stroke technique correlated with measures of performance that may highlight meaningful technique-optimization strategies. PCA could be used to provide insight into differences in kinematic strategies that could result in suboptimal performance or potential asymmetries, or to determine how well a desired technique change has been accomplished by group and/or individual athletes. Full article
(This article belongs to the Section Data)
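To illustrate the analysis pipeline, the sketch below runs PCA on synthetic hip/knee/elbow angular-velocity waveforms and extracts per-stroke loading scores of the kind that would subsequently enter the ANCOVA; the synthetic strokes are placeholders for the electrogoniometer data.

```python
# Illustrative PCA of joint angular-velocity waveforms over the rowing stroke (synthetic data).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_strokes, n_samples = 60, 101                    # 101 time-normalized points per stroke
t = np.linspace(0, 1, n_samples)

# Each stroke: concatenated hip/knee/elbow angular-velocity traces with random variation
strokes = np.hstack([
    np.sin(2 * np.pi * (t + rng.normal(0, 0.02, (n_strokes, 1)))),        # hip
    np.sin(2 * np.pi * (t + 0.1) + rng.normal(0, 0.05, (n_strokes, 1))),  # knee
    np.cos(2 * np.pi * t) * rng.normal(1.0, 0.1, (n_strokes, 1)),         # elbow
])

pca = PCA(n_components=7).fit(strokes)
loading_scores = pca.transform(strokes)           # per-stroke loading scores (Kpc,ls analogue)
print("variance explained:", np.round(pca.explained_variance_ratio_, 3))
# The loading scores would then enter an ANCOVA against split time, impulse and peak force.
```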
38 pages, 13500 KiB  
Article
InvMap and Witness Simplicial Variational Auto-Encoders
by Aniss Aiman Medbouhi, Vladislav Polianskii, Anastasia Varava and Danica Kragic
Mach. Learn. Knowl. Extr. 2023, 5(1), 199-236; https://doi.org/10.3390/make5010014 - 05 Feb 2023
Viewed by 2026
Abstract
Variational auto-encoders (VAEs) are deep generative models used for unsupervised learning; however, their standard version is not topology-aware in practice, since the data topology may not be taken into consideration. In this paper, we propose two different approaches that aim to preserve the topological structure between the input space and the latent representation of a VAE. First, we introduce InvMap-VAE as a way to turn any dimensionality reduction technique, given an embedding it produces, into a generative model within a VAE framework that provides an inverse mapping into the original space. Second, we propose the Witness Simplicial VAE as an extension of the simplicial auto-encoder to the variational setup, using a witness complex to compute the simplicial regularization, and we motivate this method theoretically using tools from algebraic topology. The Witness Simplicial VAE is independent of any dimensionality reduction technique and, together with its extension, the Isolandmarks Witness Simplicial VAE, preserves the persistent Betti numbers of a dataset better than a standard VAE. Full article
(This article belongs to the Topic Topology vs. Geometry in Data Analysis/Machine Learning)
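For orientation, here is a minimal standard VAE of the kind the paper extends; the topological (witness-complex) regularization that distinguishes the Witness Simplicial VAE is not reproduced, and the architecture and data are placeholders.

```python
# Minimal standard VAE baseline in PyTorch; the paper's topological regularizers are omitted.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, in_dim=784, latent_dim=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu, self.logvar = nn.Linear(256, latent_dim), nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, in_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    rec = F.mse_loss(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld    # a topology-preserving term would be added here in the paper's setup

model = VAE()
x = torch.rand(32, 784)                    # placeholder batch
recon, mu, logvar = model(x)
vae_loss(x, recon, mu, logvar).backward()
```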
24 pages, 745 KiB  
Systematic Review
Machine Learning and Prediction of Infectious Diseases: A Systematic Review
by Omar Enzo Santangelo, Vito Gentile, Stefano Pizzo, Domiziana Giordano and Fabrizio Cedrone
Mach. Learn. Knowl. Extr. 2023, 5(1), 175-198; https://doi.org/10.3390/make5010013 - 01 Feb 2023
Cited by 14 | Viewed by 13836
Abstract
The aim of this study is to show whether it is possible to predict infectious disease outbreaks early by using machine learning. The study was carried out following the guidelines of the Cochrane Collaboration, the Meta-analysis Of Observational Studies in Epidemiology (MOOSE), and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). The relevant literature on PubMed/Medline and Scopus was searched by combining text words and titles on medical topics. At the end of the search, this systematic review contained 75 records. The studies analyzed in this systematic review demonstrate that it is possible to predict the incidence and trends of some infectious diseases; by combining several techniques and types of machine learning, it is possible to obtain accurate and plausible results. Full article
(This article belongs to the Special Issue Machine Learning for Biomedical Data Processing)
2 pages, 507 KiB  
Editorial
Special Issue “Selected Papers from CD-MAKE 2020 and ARES 2020”
by Edgar R. Weippl, Andreas Holzinger and Peter Kieseberg
Mach. Learn. Knowl. Extr. 2023, 5(1), 173-174; https://doi.org/10.3390/make5010012 - 20 Jan 2023
Cited by 1 | Viewed by 1442
Abstract
In the current era of rapid technological advancement, machine learning (ML) is quickly becoming a dominant force in the development of smart environments [...] Full article
(This article belongs to the Special Issue Selected Papers from CD-MAKE 2020 and ARES 2020)
2 pages, 173 KiB  
Editorial
Acknowledgment to the Reviewers of Machine Learning and Knowledge Extraction in 2022
by Machine Learning and Knowledge Extraction Editorial Office
Mach. Learn. Knowl. Extr. 2023, 5(1), 171-172; https://doi.org/10.3390/make5010011 - 18 Jan 2023
Viewed by 1062
Abstract
High-quality academic publishing is built on rigorous peer review [...] Full article
2 pages, 504 KiB  
Editorial
Explainable Machine Learning
by Jochen Garcke and Ribana Roscher
Mach. Learn. Knowl. Extr. 2023, 5(1), 169-170; https://doi.org/10.3390/make5010010 - 17 Jan 2023
Cited by 1 | Viewed by 1900
Abstract
Machine learning methods are widely used in commercial applications and in many scientific areas [...] Full article
(This article belongs to the Special Issue Explainable Machine Learning)
25 pages, 14005 KiB  
Article
On Deceiving Malware Classification with Section Injection
by Adeilson Antonio da Silva and Mauricio Pamplona Segundo
Mach. Learn. Knowl. Extr. 2023, 5(1), 144-168; https://doi.org/10.3390/make5010009 - 16 Jan 2023
Cited by 3 | Viewed by 2405
Abstract
We investigate how to modify executable files to deceive malware classification systems. This work’s main contribution is a methodology to inject bytes randomly across a malware file and use it both as an attack, to decrease classification accuracy, and as a defensive method, augmenting the data available for training. It respects the operating system file format to ensure that the malware will still execute after our injection and will not change its behavior. We reproduced five state-of-the-art malware classification approaches to evaluate our injection scheme: one based on a Global Image Descriptor (GIST) + K-Nearest-Neighbors (KNN), three Convolutional Neural Network (CNN) variations, and one Gated CNN. We performed our experiments on a public dataset with 9339 malware samples from 25 different families. Our results show that a mere 7% increase in the malware size causes an accuracy drop between 25% and 40% for malware family classification. They show that an automatic malware classification system may not be as trustworthy as initially reported in the literature. We also evaluate using modified malware alongside the original samples to increase the networks’ robustness against the mentioned attacks. The results show that a combination of reordering malware sections and injecting random data can improve the overall performance of the classification. All the code is publicly available. Full article
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
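A deliberately simplified illustration: the paper injects bytes at section boundaries while keeping the PE format valid and the sample executable, whereas the sketch below merely appends random bytes to a byte string and shows the byte-to-grayscale conversion commonly used by image-based malware classifiers.

```python
# Simplified sketch only; it is NOT format-aware and does not preserve executability.
import numpy as np

def inject_random_bytes(payload: bytes, ratio: float = 0.07, seed: int = 0) -> bytes:
    """Grow the file by `ratio` with random bytes (stand-in for format-aware injection)."""
    rng = np.random.default_rng(seed)
    extra = rng.integers(0, 256, int(len(payload) * ratio), dtype=np.uint8).tobytes()
    return payload + extra

def bytes_to_image(payload: bytes, width: int = 256) -> np.ndarray:
    """Reshape raw bytes into a 2-D grayscale array, as image-based classifiers do."""
    arr = np.frombuffer(payload, dtype=np.uint8)
    rows = len(arr) // width
    return arr[: rows * width].reshape(rows, width)

original = np.random.default_rng(1).integers(0, 256, 50_000, dtype=np.uint8).tobytes()
perturbed = inject_random_bytes(original)
print(bytes_to_image(original).shape, bytes_to_image(perturbed).shape)
```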
16 pages, 794 KiB  
Article
Detection of Temporal Shifts in Semantics Using Local Graph Clustering
by Neil Hwang, Shirshendu Chatterjee, Yanming Di and Sharmodeep Bhattacharyya
Mach. Learn. Knowl. Extr. 2023, 5(1), 128-143; https://doi.org/10.3390/make5010008 - 13 Jan 2023
Cited by 1 | Viewed by 2053
Abstract
Many changes in our digital corpus have been brought about by the interplay between rapid advances in digital communication and the current environment characterized by pandemics, political polarization, and social unrest. One such change is the pace with which new words enter the mass vocabulary and the frequency at which meanings, perceptions, and interpretations of existing expressions change. The current state-of-the-art algorithms do not allow for an intuitive and rigorous detection of these changes in word meanings over time. We propose a dynamic graph-theoretic approach to inferring the semantics of words and phrases (“terms”) and detecting temporal shifts. Our approach represents each term as a stochastic time-evolving set of contextual words and is a count-based distributional semantic model in nature. We use local clustering techniques to assess the structural changes in a given word’s contextual words. We demonstrate the efficacy of our method by investigating the changes in the semantics of the phrase “Chinavirus”. We conclude that the term took on a much more pejorative meaning when the White House used the term in the second half of March 2020, although the effect appears to have been temporary. We make both the dataset and the code used to generate this paper’s results available. Full article
(This article belongs to the Special Issue Deep Learning Methods for Natural Language Processing)
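As a toy illustration of the underlying intuition, the sketch below represents a term by the context words it co-occurs with in two time slices and compares the neighborhoods; the paper's stochastic, count-based model and local graph clustering are not reproduced here, and the example sentences are invented.

```python
# Toy sketch: compare a term's contextual neighborhood across two time slices.
from collections import Counter

def context_counts(sentences, target, window=2):
    """Count words appearing within `window` tokens of the target term."""
    counts = Counter()
    for s in sentences:
        toks = s.lower().split()
        for i, tok in enumerate(toks):
            if tok == target:
                counts.update(toks[max(0, i - window): i] + toks[i + 1: i + 1 + window])
    return counts

slice_2019 = ["the virus spread in the region", "scientists studied the virus genome"]
slice_2020 = ["officials blamed the virus for the crisis", "the virus dominated the news"]

before = set(context_counts(slice_2019, "virus"))
after = set(context_counts(slice_2020, "virus"))
jaccard = len(before & after) / len(before | after)
print(f"contextual overlap across slices: {jaccard:.2f}")  # low overlap hints at a shift
```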
19 pages, 5264 KiB  
Article
E2H Distance-Weighted Minimum Reference Set for Numerical and Categorical Mixture Data and a Bayesian Swap Feature Selection Algorithm
by Yuto Omae and Masaya Mori
Mach. Learn. Knowl. Extr. 2023, 5(1), 109-127; https://doi.org/10.3390/make5010007 - 11 Jan 2023
Cited by 1 | Viewed by 1826
Abstract
Generally, when developing classification models using supervised learning methods (e.g., support vector machine, neural network, and decision tree), feature selection, as a pre-processing step, is essential to reduce calculation costs and improve the generalization scores. In this regard, the minimum reference set (MRS), which is a feature selection algorithm, can be used. The original MRS considers a feature subset as effective if it leads to the correct classification of all samples by using the 1-nearest neighbor algorithm based on small samples. However, the original MRS is only applicable to numerical features, and the distances between different classes cannot be considered. Therefore, herein, we propose a novel feature subset evaluation algorithm, referred to as the “E2H distance-weighted MRS,” which can be used for a mixture of numerical and categorical features and considers the distances between different classes in the evaluation. Moreover, a Bayesian swap feature selection algorithm, which is used to identify an effective feature subset, is also proposed. The effectiveness of the proposed methods is verified based on experiments conducted using artificially generated data comprising a mixture of numerical and categorical features. Full article
(This article belongs to the Special Issue Recent Advances in Feature Selection)
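The following sketch illustrates only the original MRS idea for numerical features: a feature subset is judged by how small a reference set suffices for a 1-nearest-neighbour classifier to label every sample correctly. The paper's E2H distance weighting for mixed data and the Bayesian swap search are omitted, and the data are synthetic.

```python
# Sketch of the plain MRS criterion (not the paper's E2H-weighted variant).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def mrs_size(X, y, max_ref=50, seed=0):
    """Size of a randomly grown reference set that lets 1-NN classify every sample correctly."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(X))
    ref = list(order[:1])
    for idx in order[1:]:
        knn = KNeighborsClassifier(n_neighbors=1).fit(X[ref], y[ref])
        if (knn.predict(X) == y).all():
            return len(ref)
        ref.append(idx)
        if len(ref) >= max_ref:
            break
    return len(ref)

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 5)); X[:60, 0] += 3.0      # feature 0 separates the classes
y = np.array([0] * 60 + [1] * 60)

print("MRS size with feature 0 only:", mrs_size(X[:, [0]], y))
print("MRS size with noisy feature 4:", mrs_size(X[:, [4]], y))  # larger => weaker subset
```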
31 pages, 1054 KiB  
Systematic Review
XAIR: A Systematic Metareview of Explainable AI (XAI) Aligned to the Software Development Process
by Tobias Clement, Nils Kemmerzell, Mohamed Abdelaal and Michael Amberg
Mach. Learn. Knowl. Extr. 2023, 5(1), 78-108; https://doi.org/10.3390/make5010006 - 11 Jan 2023
Cited by 20 | Viewed by 10562
Abstract
Currently, explainability represents a major barrier that Artificial Intelligence (AI) is facing in regard to its practical implementation in various application domains. To combat the lack of understanding of AI-based systems, Explainable AI (XAI) aims to make black-box AI models more transparent and comprehensible for humans. Fortunately, plenty of XAI methods have been introduced to tackle the explainability problem from different perspectives. However, due to the vast search space, it is challenging for ML practitioners and data scientists to start with the development of XAI software and to select the most suitable XAI methods. To tackle this challenge, we introduce XAIR, a novel systematic metareview of the most promising XAI methods and tools. XAIR differentiates itself from existing reviews by aligning its results to the five steps of the software development process, including requirement analysis, design, implementation, evaluation, and deployment. Through this mapping, we aim to create a better understanding of the individual steps of developing XAI software and to foster the creation of real-world AI applications that incorporate explainability. Finally, we conclude by highlighting new directions for future research. Full article
(This article belongs to the Special Issue Advances in Explainable Artificial Intelligence (XAI))
19 pages, 2157 KiB  
Article
Learning Sentence-Level Representations with Predictive Coding
by Vladimir Araujo, Marie-Francine Moens and Alvaro Soto
Mach. Learn. Knowl. Extr. 2023, 5(1), 59-77; https://doi.org/10.3390/make5010005 - 09 Jan 2023
Viewed by 2500
Abstract
Learning sentence representations is an essential and challenging topic in the deep learning and natural language processing communities. Recent methods pre-train big models on a massive text corpus, focusing mainly on learning the representation of contextualized words. As a result, these models cannot generate informative sentence embeddings, since they do not explicitly exploit the structure and discourse relationships existing in contiguous sentences. Drawing inspiration from human language processing, this work explores how to improve the sentence-level representations of pre-trained models by borrowing ideas from predictive coding theory. Specifically, we extend BERT-style models with bottom-up and top-down computation to predict future sentences in latent space at each intermediate layer in the network. We conduct extensive experimentation with various benchmarks for the English and Spanish languages, designed to assess sentence- and discourse-level representations and pragmatics. Our results show that our approach improves sentence representations consistently for both languages. Furthermore, the experiments also indicate that our models capture discourse and pragmatics knowledge. In addition, to validate the proposed method, we carried out an ablation study and a qualitative study with which we verified that the predictive mechanism helps to improve the quality of the representations. Full article
(This article belongs to the Special Issue Deep Learning Methods for Natural Language Processing)
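A highly simplified sketch of the core mechanism, predicting the representation of the next sentence in latent space; the paper integrates this into intermediate BERT layers with bottom-up and top-down computation, which is not reproduced here, and the toy encoder and data below are assumptions.

```python
# Toy predictive-coding-style objective: predict the next sentence's latent representation.
import torch
import torch.nn as nn

d_model = 128
encoder = nn.Sequential(nn.Linear(300, d_model), nn.Tanh())     # toy sentence encoder
predictor = nn.Linear(d_model, d_model)                         # predicts the next latent
opt = torch.optim.Adam(list(encoder.parameters()) + list(predictor.parameters()), lr=1e-3)

# Placeholder: consecutive sentence feature vectors from one document
sentences = torch.randn(16, 300)

for _ in range(100):
    reps = encoder(sentences)                     # sentence-level representations
    predicted_next = predictor(reps[:-1])         # predict sentence t+1 from sentence t
    loss = nn.functional.mse_loss(predicted_next, reps[1:].detach())
    opt.zero_grad(); loss.backward(); opt.step()
```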
16 pages, 848 KiB  
Article
IPPT4KRL: Iterative Post-Processing Transfer for Knowledge Representation Learning
by Weihang Zhang, Ovidiu Șerban, Jiahao Sun and Yike Guo
Mach. Learn. Knowl. Extr. 2023, 5(1), 43-58; https://doi.org/10.3390/make5010004 - 06 Jan 2023
Viewed by 1719
Abstract
Knowledge Graphs (KGs), a structural way to model human knowledge, have been a critical component of many artificial intelligence applications. Many KG-based tasks are built using knowledge representation learning, which embeds KG entities and relations into a low-dimensional semantic space. However, the quality of representation learning is often limited by the heterogeneity and sparsity of real-world KGs. Multi-KG representation learning, which utilizes KGs from different sources collaboratively, presents one promising solution. In this paper, we propose a simple, but effective iterative method that post-processes pre-trained knowledge graph embedding (IPPT4KRL) on individual KGs to maximize the knowledge transfer from another KG when a small portion of alignment information is introduced. Specifically, additional triples are iteratively included in the post-processing based on their adjacencies to the cross-KG alignments to refine the pre-trained embedding space of individual KGs. We also provide the benchmarking results of existing multi-KG representation learning methods on several generated and well-known datasets. The empirical results of the link prediction task on these datasets show that the proposed IPPT4KRL method achieved comparable and even superior results when compared against more complex methods in multi-KG representation learning. Full article
(This article belongs to the Special Issue Language Processing and Knowledge Extraction)
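A toy sketch of post-processing two pre-trained KG embedding spaces with a seed alignment, nudging aligned entity pairs toward each other; the paper's iterative inclusion of adjacent triples is not reproduced, and the embeddings and alignment below are random placeholders.

```python
# Toy post-processing of two KG embedding spaces using seed alignment pairs.
import numpy as np

rng = np.random.default_rng(0)
emb_a = rng.normal(size=(1000, 64))        # pre-trained embeddings of KG A (placeholder)
emb_b = rng.normal(size=(800, 64))         # pre-trained embeddings of KG B (placeholder)
alignment = [(3, 17), (42, 5), (99, 201)]  # (entity in A, same entity in B) seed pairs

def post_process(emb_a, emb_b, alignment, lr=0.1, steps=50):
    """Pull each aligned pair toward its midpoint so knowledge can transfer between KGs."""
    for _ in range(steps):
        for i, j in alignment:
            midpoint = (emb_a[i] + emb_b[j]) / 2
            emb_a[i] += lr * (midpoint - emb_a[i])
            emb_b[j] += lr * (midpoint - emb_b[j])
    return emb_a, emb_b

emb_a, emb_b = post_process(emb_a, emb_b, alignment)
print(np.linalg.norm(emb_a[3] - emb_b[17]))   # aligned entities end up close together
```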
14 pages, 2214 KiB  
Concept Paper
Detecting Arabic Cyberbullying Tweets Using Machine Learning
by Alanoud Mohammed Alduailaj and Aymen Belghith
Mach. Learn. Knowl. Extr. 2023, 5(1), 29-42; https://doi.org/10.3390/make5010003 - 05 Jan 2023
Cited by 14 | Viewed by 5942
Abstract
The advancement of technology has paved the way for a new type of bullying, which often leads to negative stigma in the social setting. Cyberbullying is a cybercrime wherein one individual becomes the target of harassment and hatred. It has recently become more prevalent due to a rise in the usage of social media platforms, and, in some severe situations, it has even led to victims’ suicides. In the literature, several cyberbullying detection methods are proposed, but they are mainly focused on word-based data and user account attributes. Furthermore, most of them are related to the English language. Meanwhile, only a few papers have studied cyberbullying detection in Arabic social media platforms. This paper, therefore, aims to use machine learning in the Arabic language for automatic cyberbullying detection. The proposed mechanism identifies cyberbullying using the Support Vector Machine (SVM) classifier algorithm by using a real dataset obtained from YouTube and Twitter to train and test the classifier. Moreover, we include the Farasa tool to overcome text limitations and improve the detection of bullying attacks. Full article
(This article belongs to the Section Data)
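As a minimal illustration of the classification setup, the sketch below fits a character n-gram TF-IDF plus linear SVM on a few invented Arabic comments; the examples and labels are made up, and the paper's Farasa-based preprocessing and real YouTube/Twitter dataset are not included.

```python
# Minimal TF-IDF + SVM sketch; toy Arabic examples only, not the paper's dataset or pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

texts = ["أنت شخص رائع", "أنت غبي وفاشل", "شكرا على المساعدة", "اخرس يا فاشل"]
labels = [0, 1, 0, 1]     # 1 = bullying, 0 = neutral (illustrative labels)

clf = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)), LinearSVC())
clf.fit(texts, labels)
print(clf.predict(["أنت فاشل"]))   # likely [1], given the shared insult n-grams
```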
15 pages, 3276 KiB  
Article
Synthetic Data Generation for Visual Detection of Flattened PET Bottles
by Vitālijs Feščenko, Jānis Ārents and Roberts Kadiķis
Mach. Learn. Knowl. Extr. 2023, 5(1), 14-28; https://doi.org/10.3390/make5010002 - 29 Dec 2022
Viewed by 2558
Abstract
Polyethylene terephthalate (PET) bottle recycling is a highly automated task; however, manual quality control is required due to inefficiencies of the process. In this paper, we explore automation of the quality control sub-task, namely visual bottle detection, using convolutional neural network (CNN)-based methods and synthetic generation of labelled training data. We propose a synthetic generation pipeline tailored for transparent and crushed PET bottle detection; however, it can also be applied to undeformed bottles if the viewpoint is set from above. We conduct various experiments on CNNs to compare the quality of real and synthetic data, show that synthetic data can reduce the amount of real data required and experiment with the combination of both datasets in multiple ways to obtain the best performance. Full article
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
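A toy sketch of the synthetic-data idea: paste a randomly sized foreground blob onto a conveyor-like background and record its bounding box as a label. A real pipeline, as in the paper, would render transparent, crushed bottle textures amid clutter; everything below is a placeholder.

```python
# Toy synthetic labelled-image generator: background + pasted foreground + bounding box.
import numpy as np

rng = np.random.default_rng(0)

def synth_sample(h=256, w=256):
    img = rng.integers(60, 90, (h, w, 3), dtype=np.uint8)          # dull conveyor background
    bw, bh = rng.integers(40, 90), rng.integers(20, 40)            # flattened-bottle extents
    x, y = rng.integers(0, w - bw), rng.integers(0, h - bh)
    img[y:y + bh, x:x + bw] = rng.integers(150, 255, 3, dtype=np.uint8)  # bright bottle blob
    return img, (x, y, x + bw, y + bh)                             # image + bbox label

images, boxes = zip(*(synth_sample() for _ in range(100)))
print(len(images), boxes[0])
```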
13 pages, 981 KiB  
Article
Multimodal AutoML via Representation Evolution
by Blaž Škrlj, Matej Bevec and Nada Lavrač
Mach. Learn. Knowl. Extr. 2023, 5(1), 1-13; https://doi.org/10.3390/make5010001 - 23 Dec 2022
Cited by 2 | Viewed by 2364
Abstract
With the increasing amounts of available data, learning simultaneously from different types of inputs is becoming necessary to obtain robust and well-performing models. With the advent of representation learning in recent years, lower-dimensional vector-based representations have become available for both images and texts, while automating simultaneous learning from multiple modalities remains a challenging problem. This paper presents an AutoML (automated machine learning) approach to model configuration identification for data composed of two modalities: texts and images. The approach is based on the idea of representation evolution, the process of automatically amplifying heterogeneous representations across several modalities, optimized jointly with a collection of fast, well-regularized linear models. The proposed approach is benchmarked against 11 unimodal and multimodal (texts and images) approaches on four real-life benchmark datasets from different domains. It achieves competitive performance with minimal human effort and low computing requirements, enabling learning from multiple modalities in an automated manner for a wider community of researchers. Full article
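To ground the idea of joining modalities with fast, well-regularized linear models, the sketch below concatenates placeholder text and image embeddings and cross-validates a logistic regression; the paper's evolutionary search over representations is not included.

```python
# Simplest multimodal baseline: concatenate modality embeddings, fit a regularized linear model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
text_emb = rng.normal(size=(500, 384))      # e.g., sentence-embedding vectors (placeholder)
image_emb = rng.normal(size=(500, 512))     # e.g., CNN image features (placeholder)
y = rng.integers(0, 2, 500)

X = np.hstack([text_emb, image_emb])        # joint multimodal representation
model = LogisticRegression(max_iter=1000, C=0.5)
print(cross_val_score(model, X, y, cv=5).mean())
```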