Advances in Data Science and Machine Learning

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 15 June 2024

Special Issue Editors


Dr. Enjie Liu
Guest Editor
School of Computer Science and Technology, University of Luton, Luton LU1 3JU, UK
Interests: AI-enabled 5G vertical applications; IoT systems; natural language processing; network protocols and management

Dr. Hongqing Yu
Guest Editor
School of Computing and Data Science Research Centre, University of Derby, Derby DE22 3AW, UK
Interests: data science; machine learning; knowledge discovery and representation; semantic technologies; deep machine learning; natural language processing

Special Issue Information

Dear Colleagues,

The Special Issue on "Advances in Data Science and Machine Learning" aims to provide a platform for researchers to showcase their latest research and findings in the area of data mining and data analysis. Data mining has attracted much attention in recent years and has witnessed considerable advancements, including new data mining algorithms, new application areas in which data mining can make a real difference, and new technologies that support the deployment of data mining and analysis tools.

The Special Issue will cover a wide range of topics related to Data Science and Machine Learning, including, but not limited to, AI algorithms, data collection, data storage, semantics, applications enabled by data mining and analysis, and development environments. Contributions from interdisciplinary teams combining their expertise are encouraged. We also welcome systematic survey papers on the latest advancements in the relevant subject areas.

The use of Data Science has significant potential in many sectors, but it also poses challenges. For example, the latest developments in sensing and networking make it possible to collect almost any kind of data, yet preserving privacy remains a challenge, which calls for suitable algorithms and infrastructure. Likewise, widely available data and knowledge can be shared over networks, but reusing them is difficult, because capturing the semantics of data, discovering knowledge, and interpreting it require appropriate algorithms and sufficient network support. Last but not least, validating the results of data mining is key to the success of data analysis, and questions of how data are collected, stored, and transmitted or shared continue to stimulate much research. In addition, applications of Data Science and Machine Learning demand considerable computation; edge and cloud computing compensate for the limited computing power of local devices, while, on the other hand, efforts are being made to optimize algorithms so that they can run on less powerful end nodes, such as mobile handsets. In this case, the network infrastructure plays an important role in distributed and federated learning.

This Special Issue is important because it provides a platform for researchers to share their latest findings and perspectives on the application of data mining and analysis and to encourage interdisciplinary collaboration among data mining, application development, security, etc. It is an exciting opportunity for researchers to contribute to the development of new technologies and methodologies that have the potential to significantly impact multiple domains.

Dr. Enjie Liu
Dr. Hongqing Yu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and using the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts is available on the Instructions for Authors page. Electronics is an international, peer-reviewed, open access, semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • data mining
  • machine learning
  • knowledge discovery and representation
  • semantic technologies
  • deep machine learning
  • natural language processing

Published Papers (3 papers)


Research


20 pages, 1477 KiB  
Article
LaANIL: ANIL with Look-Ahead Meta-Optimization and Data Parallelism
by Vasu Tammisetti, Kay Bierzynski, Georg Stettinger, Diego P. Morales-Santos, Manuel Pegalajar Cuellar and Miguel Molina-Solana
Electronics 2024, 13(8), 1585; https://doi.org/10.3390/electronics13081585 - 22 Apr 2024
Abstract
Meta-few-shot learning algorithms, such as Model-Agnostic Meta-Learning (MAML) and Almost No Inner Loop (ANIL), enable machines to learn complex tasks quickly with limited data by building on previous experience. By restricting the inner loop to the head of the neural network, ANIL simplifies the computation and reduces the complexity of MAML. Despite its benefits, ANIL suffers from issues such as accuracy variance, slow initial learning, and overfitting, which hinder its adaptation and generalization. This work proposes “Look-Ahead ANIL” (LaANIL), an enhancement to ANIL for better learning. LaANIL reorganizes ANIL’s internal architecture, integrating parallel computing techniques (to process multiple training examples simultaneously across computing units) and incorporating Nesterov momentum (which accelerates convergence by adjusting the learning rate based on past gradient information and extracting informative features for look-ahead gradient computation). These additional features make the model more competitive with the state of the art and better suited to edge deployment, thereby improving few-shot learning by enabling models to adapt quickly to new information and tasks. LaANIL’s effectiveness is validated on established meta-few-shot learning datasets, including FC100, CIFAR-FS, Mini-ImageNet, CUBirds-200-2011, and Tiered-ImageNet. On FC100, the proposed model increased validation accuracy by 7 ± 0.7% and reduced variance by 44 ± 4% in two-way two-shot classification, and increased validation accuracy by 5 ± 0.4% and reduced variance by 18 ± 2% in five-way five-shot classification; it performed similarly well on the other datasets.
(This article belongs to the Special Issue Advances in Data Science and Machine Learning)
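For readers unfamiliar with the ANIL family, the sketch below illustrates the general mechanism the abstract refers to: the inner loop adapts only the classification head while the feature-extracting body stays fixed, a batch of tasks is processed per meta-update (which could be parallelized across devices), and the outer optimizer uses Nesterov momentum. This is a minimal PyTorch illustration of those ideas, not the authors' LaANIL implementation; `sample_task`, the layer sizes, and all hyperparameters are placeholders.

```python
# Minimal ANIL-style sketch with a Nesterov-momentum outer loop (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

def sample_task(n_way=5, k_shot=2, n_query=10, dim=32):
    """Hypothetical task sampler: random tensors standing in for a real few-shot benchmark."""
    sx = torch.randn(n_way * k_shot, dim)
    sy = torch.arange(n_way).repeat_interleave(k_shot)
    qx = torch.randn(n_way * n_query, dim)
    qy = torch.arange(n_way).repeat_interleave(n_query)
    return sx, sy, qx, qy

body = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU())
head = nn.Linear(64, 5)                                   # 5-way classification head
outer_opt = torch.optim.SGD(                              # Nesterov momentum on the outer update,
    list(body.parameters()) + list(head.parameters()),    # echoing LaANIL's look-ahead idea
    lr=1e-2, momentum=0.9, nesterov=True)

def adapt_head(sx, sy, steps=5, inner_lr=0.1):
    """ANIL-style inner loop: adapt only the head's fast weights on the support set."""
    fast = [p.clone() for p in head.parameters()]         # (weight, bias) fast weights
    feats = body(sx)                                      # the body is not adapted (ANIL)
    for _ in range(steps):
        loss = F.cross_entropy(F.linear(feats, fast[0], fast[1]), sy)
        grads = torch.autograd.grad(loss, fast, create_graph=True)
        fast = [w - inner_lr * g for w, g in zip(fast, grads)]
    return fast

for _ in range(20):                                       # meta-training iterations
    outer_opt.zero_grad()
    meta_loss = 0.0
    for _ in range(4):                                    # a batch of tasks; these inner loops are
        sx, sy, qx, qy = sample_task()                    # independent and could run in parallel
        fast = adapt_head(sx, sy)
        query_logits = F.linear(body(qx), fast[0], fast[1])
        meta_loss = meta_loss + F.cross_entropy(query_logits, qy)
    meta_loss.backward()                                  # gradients flow through the fast weights
    outer_opt.step()
```

Because the per-task inner loops are independent, they can be dispatched to separate workers or GPUs, which is where the data parallelism mentioned in the abstract comes in.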

16 pages, 2532 KiB  
Article
Enhancing Model Agnostic Meta-Learning via Gradient Similarity Loss
by Jae-Ho Tak and Byung-Woo Hong
Electronics 2024, 13(3), 535; https://doi.org/10.3390/electronics13030535 - 29 Jan 2024
Abstract
Artificial intelligence (AI) technology has advanced significantly and is now capable of performing tasks previously believed to be exclusive to skilled humans. However, in contrast to humans, who can develop skills with relatively little data, AI models often require substantial amounts of data to emulate human cognitive abilities in specific areas. In situations where adequate pre-training data are not available, meta-learning becomes a crucial method for enhancing generalization. The Model Agnostic Meta-Learning (MAML) algorithm, which employs second-order derivative calculations to fine-tune initial parameters toward better starting points, plays a pivotal role in this area. However, the computational demand of this method can be challenging for modern models with a large number of parameters. The concept of the Approximate Hessian Effect is introduced in this context, examining the effectiveness of second-order derivatives in identifying initial parameters conducive to high generalization performance. The study suggests the use of cosine similarity and squared error (L2 loss) as a loss function within the Approximate Hessian Effect framework to modify gradient weights, aiming for more generalizable model parameters. Additionally, an algorithm that relies on first-order calculations is presented, designed to achieve performance levels comparable to MAML. This approach was tested and compared with traditional MAML methods using both the MiniImagenet dataset and a modified MNIST dataset, and the results were analyzed to evaluate its efficiency. Compared to previous studies that achieved good performance using only the first derivative, this approach is more efficient because it does not require iterative loops to converge on additional loss functions. There is also potential for further performance enhancement through hyperparameter tuning.
(This article belongs to the Special Issue Advances in Data Science and Machine Learning)
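The abstract describes comparing gradients via cosine similarity and squared error in order to modify gradient weights while staying first-order. The paper's exact formulation is not reproduced here; as a hypothetical illustration of that general idea, the PyTorch sketch below weights a first-order meta-update by how well the support-set and query-set gradients agree. The `similarity_weight` function, the sigmoid combination, and all data are invented for the example.

```python
# Hypothetical first-order "gradient similarity" reweighting sketch (not the paper's algorithm).
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 5))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def flat_grad(loss):
    """First-order gradient of `loss` w.r.t. all model parameters, flattened into one vector."""
    grads = torch.autograd.grad(loss, model.parameters(), retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

def similarity_weight(g_support, g_query, lam=0.1):
    """Combine cosine similarity and squared error into a scalar weight in (0, 1)."""
    cos = F.cosine_similarity(g_support, g_query, dim=0)   # alignment of the two gradients
    l2 = F.mse_loss(g_support, g_query)                    # squared-error (L2) term
    return torch.sigmoid(cos - lam * l2)                   # larger when the gradients agree

for _ in range(10):                                        # meta-iterations on placeholder data
    sx, sy = torch.randn(10, 32), torch.randint(0, 5, (10,))   # support set
    qx, qy = torch.randn(25, 32), torch.randint(0, 5, (25,))   # query set
    support_loss = F.cross_entropy(model(sx), sy)
    query_loss = F.cross_entropy(model(qx), qy)
    with torch.no_grad():                                  # the weight itself is not differentiated,
        w = similarity_weight(flat_grad(support_loss),     # so the update stays first-order
                              flat_grad(query_loss))
    opt.zero_grad()
    (w * query_loss).backward()                            # similarity-weighted meta-update
    opt.step()
```

Keeping the weight outside the autograd graph is what avoids the second-order (Hessian) computation that makes vanilla MAML expensive.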

Review


27 pages, 2719 KiB  
Review
A Comprehensive Review of DeepFake Detection Using Advanced Machine Learning and Fusion Methods
by Gourav Gupta, Kiran Raja, Manish Gupta, Tony Jan, Scott Thompson Whiteside and Mukesh Prasad
Electronics 2024, 13(1), 95; https://doi.org/10.3390/electronics13010095 - 25 Dec 2023
Abstract
Recent advances in Generative Artificial Intelligence (AI) have increased the possibility of generating hyper-realistic DeepFake videos or images that can cause serious harm to vulnerable children, individuals, and society at large through misinformation. To address this serious problem, many researchers have attempted to detect DeepFakes using advanced machine learning and fusion techniques. This paper presents a detailed review of past and present DeepFake detection methods, with a particular focus on media-modality fusion and machine learning, and provides detailed information on the benchmark datasets available for DeepFake detection research. The review covers the 67 primary papers on DeepFake detection published between 2015 and 2023, including 55 research papers on image and video DeepFake detection methodologies and 15 research papers on identifying and verifying speaker authentication. This paper offers valuable information on DeepFake detection research and a review analysis of advanced machine learning and modality fusion that sets it apart from other review papers. It further offers informed guidelines for future work in DeepFake detection using advanced state-of-the-art machine learning and information fusion models, which should support further advancement in DeepFake detection for a sustainable and safer digital future.
(This article belongs to the Special Issue Advances in Data Science and Machine Learning)
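As a concrete illustration of what "modality fusion" can mean in this context, the sketch below shows generic decision-level (late) fusion of a visual-stream and an audio-stream detector, one of the fusion families such a review typically covers. The detector architectures, feature dimensions, and fusion weights are placeholders and do not correspond to any specific method surveyed in the paper.

```python
# Generic late-fusion (decision-level) sketch for multi-modal DeepFake detection (illustrative only).
import torch
import torch.nn as nn

class FrameDetector(nn.Module):
    """Placeholder visual-stream detector: pooled frame features -> fake probability."""
    def __init__(self, feat_dim=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 1))
    def forward(self, x):
        return torch.sigmoid(self.net(x)).squeeze(-1)

class AudioDetector(nn.Module):
    """Placeholder audio-stream detector: speech embedding -> fake probability."""
    def __init__(self, feat_dim=192):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))
    def forward(self, x):
        return torch.sigmoid(self.net(x)).squeeze(-1)

def fuse_scores(p_visual, p_audio, w_visual=0.6, w_audio=0.4):
    """Weighted-sum score fusion; the weights would normally be tuned on a validation set."""
    return w_visual * p_visual + w_audio * p_audio

visual, audio = FrameDetector(), AudioDetector()
frame_feats = torch.randn(8, 512)        # e.g. pooled CNN features for 8 clips (dummy data)
speech_embs = torch.randn(8, 192)        # e.g. speaker-embedding vectors (dummy data)
fused = fuse_scores(visual(frame_feats), audio(speech_embs))
print((fused > 0.5).int())               # 1 = predicted DeepFake, 0 = predicted genuine
```

Feature-level (early) fusion, where the modality embeddings are concatenated before a single classifier, is the other common design choice discussed in this line of work.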
