Advances in Data Mining, Neural Networks and Deep Graph Learning

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematics and Computer Science".

Deadline for manuscript submissions: 30 June 2024

Special Issue Editors


Dr. Dawei Cheng
Guest Editor
School of Electronics and Information Engineering, Tongji University, No. 4800, Cao'an Highway, Shanghai 200070, China
Interests: data mining and machine learning; financial big data; graph neural networks; deep learning and reinforcement learning

Dr. Zhibin Niu
Guest Editor
College of Intelligence and Computing, Tianjin University, No. 135 Yaguan Road, Jinnan District, Tianjin 300350, China
Interests: visualization and visual analytics; data mining; artificial intelligence

Dr. Yiyi Zhang
Guest Editor
Department of Computer Science and Engineering, Shanghai Jiao Tong University, 800 Dongchuan Rd., Minhang District, Shanghai 200240, China
Interests: artificial intelligence

Special Issue Information

Dear Colleagues,

Data mining now underpins a wide range of everyday tasks. Owing to the strong expressive power of graphs, modeling real-world problems with graph structures has recently attracted interest from both academia and industry across many domains, including finance, social networks, biology, and marketing.

Graph learning offers the advantage of exploiting the topological structure of graphs. As a crucial branch of artificial intelligence, deep graph learning methods, such as graph neural networks, network embedding, and representation learning, are highly effective in solving real-world challenges. The research community has contributed algorithms and techniques for graph-structured data such as social networks, knowledge graphs, and biological networks. Despite this rapid advancement, in-depth research on data mining, neural networks, and deep graph learning, from both theoretical and applied perspectives, is still needed to better support real-world data analysis.

This Special Issue will feature recent advances in data mining, neural networks, and deep graph learning algorithms and techniques. The issue welcomes both theoretical and applied research, as well as applications in real-world domains.

Dr. Dawei Cheng 
Dr. Zhibin Niu
Dr. Yiyi Zhang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • data mining
  • deep learning
  • big data analysis
  • neural networks
  • graph learning and computing

Published Papers (2 papers)

Research

22 pages, 1353 KiB  
Article
An Optimized LSTM Neural Network for Accurate Estimation of Software Development Effort
by Anca-Elena Iordan
Mathematics 2024, 12(2), 200; https://doi.org/10.3390/math12020200 - 08 Jan 2024
Cited by 1
Abstract
Software effort estimation has been a significant research theme in recent years. The most important challenge for project managers is reaching their targets within a fixed time boundary. Machine learning strategies can take software management to an entirely new level. The purpose of this research work is to compare an optimized long short-term memory neural network, based on particle swarm optimization, with six machine learning methods used to predict software development effort: K-nearest neighbours, decision tree, random forest, gradient boosted tree, multilayer perceptron, and long short-term memory. The process of effort estimation uses five datasets: China and Desharnais, for which outputs are expressed in person-hours; and Albrecht, Kemerer, and Cocomo81, for which outputs are measured in person-months. To compare the accuracy of these intelligent methods, four metrics were used: mean absolute error, median absolute error, root mean square error, and coefficient of determination. For all five datasets, based on the metric values, it was concluded that the proposed optimized long short-term memory method predicts the effort required to develop a software product more accurately. The Python 3.8.12 programming language was used in conjunction with TensorFlow 2.10.0, Keras 2.10.0, and scikit-learn 1.0.1 to implement these machine learning methods.
(This article belongs to the Special Issue Advances in Data Mining, Neural Networks and Deep Graph Learning)
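As a rough illustration of the evaluation pipeline this abstract describes (not the author's implementation), the sketch below trains a small Keras LSTM regressor on placeholder effort data and scores it with the four reported metrics. The dataset, network size, and training settings are invented for the example, and the particle swarm optimization step is only indicated by a comment.

```python
# Illustrative sketch only -- not the paper's implementation.
# An LSTM regressor for tabular effort data, scored with the four
# metrics named in the abstract (MAE, MedAE, RMSE, R^2).
import numpy as np
from sklearn.metrics import (mean_absolute_error, median_absolute_error,
                             mean_squared_error, r2_score)
from sklearn.model_selection import train_test_split
from tensorflow import keras

rng = np.random.default_rng(0)
X = rng.random((500, 8))                      # placeholder project features
y = X.sum(axis=1) + rng.normal(0, 0.1, 500)   # placeholder effort values

# LSTM layers expect 3-D input: (samples, timesteps, features).
X = X.reshape(-1, 1, 8)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = keras.Sequential([
    keras.layers.Input(shape=(1, 8)),
    keras.layers.LSTM(32),   # in the paper, PSO would tune such hyperparameters
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, epochs=50, verbose=0)

pred = model.predict(X_test, verbose=0).ravel()
print("MAE  :", mean_absolute_error(y_test, pred))
print("MedAE:", median_absolute_error(y_test, pred))
print("RMSE :", np.sqrt(mean_squared_error(y_test, pred)))
print("R^2  :", r2_score(y_test, pred))
```

In the study itself, particle swarm optimization would search over hyperparameters such as the number of LSTM units and the learning rate before the final fit on each of the five datasets.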

14 pages, 2326 KiB  
Article
Attributed Graph Embedding with Random Walk Regularization and Centrality-Based Attention
by Yuxuan Yang, Beibei Han, Zanxi Ran, Min Gao and Yingmei Wei
Mathematics 2023, 11(8), 1830; https://doi.org/10.3390/math11081830 - 12 Apr 2023
Abstract
Graph-embedding learning is the foundation of complex information network analysis, aiming to represent the nodes of a graph as low-dimensional dense real-valued vectors for use in practical analysis tasks. In recent years, the study of graph network representation learning has received increasing attention from researchers, and, among these approaches, graph neural networks (GNNs) based on deep learning play an increasingly important role in the field. However, most existing graph neural networks cannot effectively exploit higher-order neighborhood information, and they tend to ignore the influence of latent representations and structural properties on graph embedding. To address these issues, we introduce a centrality encoding to learn node properties, add an attention mechanism to better distinguish the significance of neighboring nodes, and introduce random walk regularization to sample neighbors that consistently satisfy predetermined criteria, which allows the model to learn latent node representations. We tested the performance of our model on node-clustering and link-prediction tasks using three widely recognized benchmark datasets. The outcomes of our experiments demonstrate that our model significantly surpasses the baseline methods on both tasks, indicating that the graph embeddings it generates are highly expressive.
(This article belongs to the Special Issue Advances in Data Mining, Neural Networks and Deep Graph Learning)
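The model's details are in the paper itself; as a loose, self-contained sketch of two of the ingredients the abstract names, the snippet below computes a per-node centrality encoding and draws uniform random walks over a small networkx benchmark graph. The choice of degree centrality, the graph, and the walk length are placeholder assumptions for illustration only.

```python
# Illustrative sketch only -- not the paper's model. It isolates two
# ingredients named in the abstract: a centrality encoding of node
# properties and a random-walk neighbor sampler.
import networkx as nx
import numpy as np

def centrality_encoding(G):
    """Degree centrality as a simple per-node structural feature."""
    cent = nx.degree_centrality(G)
    return np.array([cent[n] for n in G.nodes()])

def random_walk(G, start, length, rng):
    """Uniform random walk from `start`; a regularizer could then pull
    the embeddings of co-visited nodes closer together."""
    walk = [start]
    for _ in range(length - 1):
        nbrs = list(G.neighbors(walk[-1]))
        if not nbrs:
            break
        walk.append(rng.choice(nbrs))
    return walk

rng = np.random.default_rng(0)
G = nx.karate_club_graph()                # placeholder benchmark graph
enc = centrality_encoding(G)
walks = [random_walk(G, n, 5, rng) for n in G.nodes()]
print("centrality of node 0:", enc[0])
print("sample walk:", walks[0])
```

An attention layer, the third ingredient, would then weight each neighbor's contribution using these centrality features alongside the node attributes.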
