Trustworthy Graph Neural Networks: Models and Applications

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematics and Computer Science".

Deadline for manuscript submissions: closed (31 October 2023) | Viewed by 18095

Special Issue Editors


Guest Editor
School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610031, China
Interests: data mining; machine learning; artificial intelligence; information retrieval; social networks

Guest Editor
School of Computer Science, Beijing University of Posts and Telecommunications, Beijing, China
Interests: data mining; machine learning; analysis of complex networks; network embedding; graph neural networks; community detection

Special Issue Information

Dear Colleagues,

In the era of big data, graph data has attracted considerable attention. We have witnessed the impressive performance of graph neural networks (GNNs) in dealing with graph data, as well as various real-world applications (e.g., recommender systems, molecular property prediction). The increasing number of works on GNNs indicates a global trend in both academic and industrial communities. Despite the progress made in GNNs, there are various open, unexplored, and unidentified challenges. One major concern is whether current GNNs are trustworthy. This is an inescapable problem when GNNs step into real-world applications, especially in risk-sensitive domains. To address this problem, we need to make GNNs more robust, explainable, and stable. Thus, there is a pressing demand for novel and advanced trustworthy GNNs. In this Special Issue, our goal is to bring together researchers and practitioners working in the areas of GNNs to address a wide range of theoretical and practical issues.

Prof. Dr. Zhao Kang
Prof. Dr. Xiao Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • graph neural networks
  • deep learning for graphs
  • graph representation learning
  • spectral graph theory
  • robust graph neural networks
  • explainable graph neural networks
  • stable graph neural networks
  • uncertainty in graph neural networks
  • graph neural networks related applications

Published Papers (9 papers)


Research

19 pages, 4456 KiB  
Article
Distributed Fire Detection and Localization Model Using Federated Learning
by Yue Hu, Xinghao Fu and Wei Zeng
Mathematics 2023, 11(7), 1647; https://doi.org/10.3390/math11071647 - 29 Mar 2023
Cited by 1 | Viewed by 1212
Abstract
Fire detection and monitoring systems based on machine vision have been gradually developed in recent years. Traditional centralized deep learning model training methods transfer large amounts of video image data to the cloud, making image data privacy and confidentiality difficult. In order to protect the data privacy in the fire detection system with heterogeneous data and to enhance its efficiency, this paper proposes an improved federated learning algorithm incorporating computer vision: FedVIS, which uses a federated dropout and gradient selection algorithm to reduce communication overhead, and uses a transformer to replace a traditional neural network to improve the robustness of federated learning in the context of heterogeneous data. FedVIS can reduce the communication overhead in addition to reducing the catastrophic forgetting of previous devices, improving convergence, and producing superior global models. In this paper’s experimental results, FedVIS outperforms the common federated learning methods FedSGD, FedAVG, FedAWS, and CMFL, and improves the detection effect by reducing communication costs. As the number of clients increases, the accuracy of other algorithmic models decreases by 2–5%, and the number of communication rounds required increases significantly; meanwhile, our method maintains a superior detection performance while requiring roughly the same number of communication rounds. Full article
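The communication-saving idea behind FedVIS can be illustrated with a minimal FedAvg-style round in which each client uploads only its k largest-magnitude update entries. This is a hedged simplification, not the paper's algorithm: the function names are hypothetical, and the top-k rule below merely stands in for the federated dropout and gradient selection steps.

```python
import numpy as np

def topk_sparsify(update, k):
    """Keep only the k largest-magnitude entries of a client update
    (a simple stand-in for the paper's gradient selection step)."""
    flat = update.ravel()
    idx = np.argsort(np.abs(flat))[-k:]
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.reshape(update.shape)

def federated_round(global_w, client_grads, k):
    """One FedAvg-style round: the server averages the sparsified
    client updates and applies them to the global weights."""
    updates = [topk_sparsify(g, k) for g in client_grads]
    return global_w - np.mean(updates, axis=0)

rng = np.random.default_rng(0)
w = np.zeros(10)
grads = [rng.normal(size=10) for _ in range(4)]
w_new = federated_round(w, grads, k=3)  # each client uploaded 3 of 10 entries
```

Sparsifying this way cuts each client's upload to k/d of the dense gradient per round, at the cost of a biased update, which is one reason communication rounds and accuracy trade off as the abstract describes.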
(This article belongs to the Special Issue Trustworthy Graph Neural Networks: Models and Applications)

17 pages, 2094 KiB  
Article
Local-Sample-Weighted Clustering Ensemble with High-Order Graph Diffusion
by Jianwen Gan, Yunhui Liang and Liang Du
Mathematics 2023, 11(6), 1340; https://doi.org/10.3390/math11061340 - 09 Mar 2023
Viewed by 987
Abstract
The clustering ensemble method has attracted much attention because it can improve the stability and robustness of single clustering methods. Among them, similarity-matrix-based methods or graph-based methods have had a wide range of applications in recent years. Most similarity-matrix-based methods calculate fully connected pairwise similarities by treating a base cluster as a whole and ignoring the importance of the relevance ranking of samples within the same base cluster. Since unreliable similarity estimates degrade clustering performance, constructing accurate similarity matrices is of great importance in applications. Higher-order graph diffusion based on reliable similarity matrices can further uncover potential connections between data. In this paper, we propose a graph-learning-based ensemble algorithm for local-sample-weighted clustering, which implicitly optimizes the adaptive weights of different neighborhoods based on the ranking importance of different neighbors. By further diffusion on the consensus matrix, we obtained an optimal consistency matrix with stronger discriminative power, revealing the potential similarity relationship between samples. The experimental results showed that, compared with the second-best DREC algorithm, the accuracy of the proposed algorithm improved by 17.7% and its normalized mutual information (NMI) improved by 15.88%. All empirical results showed that our clustering model consistently outperformed the related clustering methods. Full article
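The higher-order diffusion step can be sketched as repeated mixing of a row-normalised similarity matrix with its own powers, so that multi-hop connections strengthen within-cluster ties. The specific update rule below (a teleported power iteration with mixing weight alpha) is an assumption for illustration; the paper's exact diffusion operator may differ.

```python
import numpy as np

def graph_diffusion(S, alpha=0.8, steps=3):
    """Higher-order diffusion of a similarity matrix: repeatedly mix
    each node's similarities with those of its neighbours, uncovering
    multi-hop connections between samples."""
    # Row-normalise to a transition matrix.
    P = S / S.sum(axis=1, keepdims=True)
    A = P.copy()
    for _ in range(steps):
        A = alpha * P @ A + (1 - alpha) * P
    return A

# Two clusters linked only weakly; diffusion reinforces within-cluster ties.
S = np.array([[1.0, 0.9, 0.1, 0.0],
              [0.9, 1.0, 0.0, 0.1],
              [0.1, 0.0, 1.0, 0.9],
              [0.0, 0.1, 0.9, 1.0]])
D = graph_diffusion(S)
```

Because each update mixes row-stochastic matrices, the diffused matrix stays row-stochastic, and the within-cluster entries (e.g., between samples 0 and 1) remain dominant over the weak cross-cluster links.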

13 pages, 805 KiB  
Article
Graph Learning for Attributed Graph Clustering
by Xiaoran Zhang, Xuanting Xie and Zhao Kang
Mathematics 2022, 10(24), 4834; https://doi.org/10.3390/math10244834 - 19 Dec 2022
Viewed by 2400
Abstract
Due to the explosive growth of graph data, attributed graph clustering has received increasing attention recently. Although deep-neural-network-based graph clustering methods have achieved impressive performance, the huge number of training parameters makes them time-consuming and memory-intensive. Moreover, real-world graphs are often noisy or incomplete and are not optimal for the clustering task. To solve these problems, we design a graph learning framework for the attributed graph clustering task in this study. We firstly develop a shallow model for learning a fine-grained graph from smoothed data, which sufficiently exploits both node attributes and topology information. A regularizer is also designed to flexibly explore the high-order information hidden in the data. To further reduce the computation complexity, we then propose a method linear in the node number n, where a smaller graph is learned based on an importance sampling strategy to select m (m ≪ n) anchors. Extensive experiments on six benchmark datasets demonstrate that our proposed methods are not only effective but also more efficient than state-of-the-art techniques. In particular, our method surpasses many recent deep learning approaches. Full article
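The anchor selection can be sketched as importance sampling of m rows with m ≪ n. The abstract does not specify the importance scores, so the sketch below uses feature norms as a hypothetical proxy; the function name and scoring rule are assumptions.

```python
import numpy as np

def sample_anchors(X, m, rng=None):
    """Draw m anchor points (m << n) without replacement, with
    probability proportional to an importance score; feature norms
    are a placeholder for the paper's actual scores."""
    if rng is None:
        rng = np.random.default_rng()
    scores = np.linalg.norm(X, axis=1)
    p = scores / scores.sum()
    idx = rng.choice(X.shape[0], size=m, replace=False, p=p)
    return X[idx]

X = np.random.default_rng(0).normal(size=(100, 5))  # n = 100 points
anchors = sample_anchors(X, m=10, rng=np.random.default_rng(1))
```

Learning a graph between all n points and only m anchors replaces the O(n²) pairwise graph with an n × m one, which is what makes the method linear in n for fixed m.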

19 pages, 2486 KiB  
Article
Fairness-Aware Predictive Graph Learning in Social Networks
by Lei Wang, Shuo Yu, Falih Gozi Febrinanto, Fayez Alqahtani and Tarek E. El-Tobely
Mathematics 2022, 10(15), 2696; https://doi.org/10.3390/math10152696 - 29 Jul 2022
Viewed by 1198
Abstract
Predictive graph learning approaches have been bringing significant advantages in many real-life applications, such as social networks, recommender systems, and other social-related downstream tasks. For those applications, learning models should be able to produce a great prediction result to maximize the usability of their application. However, current graph learning methods generally neglect differences in link strength, leading to discriminatory predictive results and uneven performance across tasks. A fairness-aware predictive learning model is therefore needed, one that balances link-strength differences rather than merely formalizing them. To address this problem, we first formally define two biases (i.e., Preference and Favoritism) that widely exist in previous representation learning models. Then, we employ modularity maximization to distinguish strong and weak links from the quantitative perspective. Eventually, we propose a novel predictive learning framework entitled ACE that first implements the link strength differentiated learning process and then integrates it with a dual propagation process. The effectiveness and fairness of our proposed ACE have been verified on four real-world social networks. Compared to nine different state-of-the-art methods, ACE and its variants show better performance. The ACE framework can better reconstruct networks, thus also providing a high possibility of resolving misinformation in graph-structured data. Full article
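The modularity criterion used here to separate strong from weak links can be computed directly from the adjacency matrix. The sketch below evaluates Newman's modularity Q for a given partition; the bridge edge between the two triangles in the example is exactly the kind of "weak" link a modularity-maximising partition cuts. This is a textbook quantity, not ACE itself.

```python
import numpy as np

def modularity(A, communities):
    """Newman modularity Q of a partition: fraction of edges inside
    communities minus the fraction expected under a random rewiring
    with the same degrees."""
    m = A.sum() / 2.0           # number of edges (A is symmetric, 0/1)
    k = A.sum(axis=1)           # node degrees
    n = A.shape[0]
    Q = 0.0
    for i in range(n):
        for j in range(n):
            if communities[i] == communities[j]:
                Q += A[i, j] - k[i] * k[j] / (2 * m)
    return Q / (2 * m)

# Two triangles (0-1-2 and 3-4-5) joined by a single bridge edge 2-3.
A = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1
good = modularity(A, [0, 0, 0, 1, 1, 1])  # matches the two triangles
bad = modularity(A, [0, 1, 0, 1, 0, 1])   # arbitrary split
```

The triangle partition scores Q = 5/14 ≈ 0.357, while the arbitrary split is negative, so maximising Q recovers the strong intra-triangle links and leaves the bridge as a weak link.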

20 pages, 9549 KiB  
Article
Knowledge-Based Scene Graph Generation with Visual Contextual Dependency
by Lizong Zhang, Haojun Yin, Bei Hui, Sijuan Liu and Wei Zhang
Mathematics 2022, 10(14), 2525; https://doi.org/10.3390/math10142525 - 20 Jul 2022
Cited by 3 | Viewed by 2018
Abstract
Scene graph generation is the basis of various computer vision applications, including image retrieval, visual question answering, and image captioning. Previous studies have relied on visual features or incorporated auxiliary information to predict object relationships. However, the rich semantics of external knowledge have not yet been fully utilized, and the combination of visual and auxiliary information can lead to visual dependencies, which impacts relationship prediction among objects. Therefore, we propose a novel knowledge-based model with adjustable visual contextual dependency. Our model has three key components. The first module extracts the visual features and bounding boxes in the input image. The second module uses two encoders to fully integrate visual information and external knowledge. Finally, visual context loss and visual relationship loss are introduced to adjust the visual dependency of the model. The difference between the initial prediction results and the visual dependency results is calculated to generate the dependency-corrected results. The proposed model can obtain better global and contextual information for predicting object relationships, and the visual dependencies can be adjusted through the two loss functions. The results of extensive experiments show that our model outperforms most existing methods. Full article

23 pages, 984 KiB  
Article
A Clustering Ensemble Framework with Integration of Data Characteristics and Structure Information: A Graph Neural Networks Approach
by Hang-Yuan Du and Wen-Jian Wang
Mathematics 2022, 10(11), 1834; https://doi.org/10.3390/math10111834 - 26 May 2022
Cited by 3 | Viewed by 1880
Abstract
Clustering ensemble is a research hotspot of data mining that aggregates several base clustering results to generate a single output clustering with improved robustness and stability. However, the validity of the ensemble result is usually affected by unreliability in the generation and integration of base clusterings. In order to address this issue, we develop a clustering ensemble framework viewed from graph neural networks that generates an ensemble result by integrating data characteristics and structure information. In this framework, we extract structure information from base clustering results of the data set by using a coupling affinity measure. After that, we combine structure information with data characteristics by using a graph neural network (GNN) to learn their joint embeddings in latent space. Then, we employ a Gaussian mixture model (GMM) to predict the final cluster assignment in the latent space. Finally, we construct the GNN and GMM as a unified optimization model to integrate the objectives of graph embedding and consensus clustering. Our framework can not only elegantly combine information in feature space and structure space, but can also achieve suitable representations for final cluster partitioning. Thus, it can produce an outstanding result. Experimental results on six synthetic benchmark data sets and six real-world data sets show that the proposed framework yields a better performance compared to 12 reference algorithms that are developed based on either clustering ensemble architecture or a deep clustering strategy. Full article
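The GNN-plus-GMM pipeline can be sketched in two steps: a propagation layer produces latent embeddings, and a GMM E-step assigns soft cluster responsibilities in that latent space. This is a deliberately minimal illustration, not the paper's unified optimization: the GMM parameters are fixed rather than learned jointly, and all names are hypothetical.

```python
import numpy as np

def gnn_embed(A_hat, X, W):
    """One propagation layer as a stand-in for the paper's GNN encoder:
    neighbourhood-averaged features times a weight matrix."""
    return np.tanh(A_hat @ X @ W)

def gmm_responsibilities(Z, means, var=1.0):
    """E-step of an isotropic GMM: soft cluster assignments for each
    embedded point, computed with a numerically stable softmax."""
    d2 = ((Z[:, None, :] - means[None, :, :]) ** 2).sum(-1)
    logp = -d2 / (2 * var)
    p = np.exp(logp - logp.max(axis=1, keepdims=True))
    return p / p.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
A_hat = np.eye(4)  # trivial graph, just for the demo
X = np.vstack([rng.normal(0, 0.1, (2, 2)),   # blob near the origin
               rng.normal(5, 0.1, (2, 2))])  # blob far away
Z = gnn_embed(A_hat, X, np.eye(2))
R = gmm_responsibilities(Z, means=np.array([[0.0, 0.0], [1.0, 1.0]]), var=0.25)
```

In the full framework, the GMM log-likelihood would backpropagate into the GNN weights so that embedding and consensus clustering are optimized together; here the two stages are shown separately for clarity.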

16 pages, 2745 KiB  
Article
Multi-View Graph Clustering by Adaptive Manifold Learning
by Peng Zhao, Hongjie Wu and Shudong Huang
Mathematics 2022, 10(11), 1821; https://doi.org/10.3390/math10111821 - 25 May 2022
Cited by 4 | Viewed by 1828
Abstract
Graph-oriented methods have been widely adopted in multi-view clustering because of their efficiency in learning heterogeneous relationships and complex structures hidden in data. However, existing methods are typically investigated based on a Euclidean structure instead of a more suitable manifold topological structure. Hence, it is expected that a more suitable manifold topological structure will be adopted to carry out intrinsic similarity learning. In this paper, we explore the implied adaptive manifold for multi-view graph clustering. Specifically, our model seamlessly integrates multiple adaptive graphs into a consensus graph with the manifold topological structure considered. We further manipulate the consensus graph with a useful rank constraint so that its connected components precisely correspond to distinct clusters. As a result, our model is able to directly achieve a discrete clustering result without any post-processing. In terms of the clustering results, our method achieves the best performance in 22 out of 24 cases in terms of four evaluation metrics on six datasets, which demonstrates the effectiveness of the proposed model. In terms of computational performance, our optimization algorithm is generally faster than, or in line with, other state-of-the-art algorithms, which validates the efficiency of the proposed algorithm. Full article
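The rank constraint rests on a standard spectral fact: the multiplicity of the zero eigenvalue of a graph Laplacian equals the number of connected components, so forcing rank(L) = n − k makes the consensus graph split into exactly k components, i.e., k clusters, with no post-processing. A minimal check of this fact:

```python
import numpy as np

def n_connected_components(W, tol=1e-8):
    """Count connected components of a weighted graph as the number of
    (numerically) zero eigenvalues of its unnormalised Laplacian."""
    L = np.diag(W.sum(axis=1)) - W
    eigvals = np.linalg.eigvalsh(L)
    return int(np.sum(eigvals < tol))

# Two disconnected pairs -> two components, i.e., two ready-made clusters.
W = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

W2 = W.copy()
W2[1, 2] = W2[2, 1] = 1.0  # bridging edge merges the graph into one component
```

Enforcing this eigenvalue condition during graph learning is what lets the model read cluster assignments straight off the consensus graph's connected components.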

14 pages, 1424 KiB  
Article
Robust Graph Neural Networks via Ensemble Learning
by Qi Lin, Shuo Yu, Ke Sun, Wenhong Zhao, Osama Alfarraj, Amr Tolba and Feng Xia
Mathematics 2022, 10(8), 1300; https://doi.org/10.3390/math10081300 - 14 Apr 2022
Cited by 4 | Viewed by 2635
Abstract
Graph neural networks (GNNs) have demonstrated a remarkable ability in the task of semi-supervised node classification. However, most existing GNNs suffer from nonrobustness issues, which pose a great challenge for applying GNNs in sensitive scenarios. Some researchers concentrate on constructing an ensemble model to mitigate the nonrobustness issues. Nevertheless, these methods ignore the interaction among base models, leading to similar graph representations. Moreover, due to the deterministic propagation applied in most existing GNNs, each node highly relies on its neighbors, leaving the nodes to be sensitive to perturbations. Therefore, in this paper, we propose a novel framework of graph ensemble learning based on knowledge passing (called GEL) to address the above issues. In order to achieve interaction, we consider the predictions of prior models as knowledge to obtain more reliable predictions. Moreover, we design a multilayer DropNode propagation strategy to reduce each node’s dependence on particular neighbors. This strategy also empowers each node to aggregate information from diverse neighbors, alleviating oversmoothing issues. We conduct experiments on three benchmark datasets, including Cora, Citeseer, and Pubmed. GEL outperforms GCN by more than 5% in terms of accuracy across all three datasets and also performs better than other state-of-the-art baselines. Extensive experimental results also show that GEL alleviates the nonrobustness and oversmoothing issues. Full article
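The multilayer DropNode strategy can be sketched as follows: entire node feature rows are randomly zeroed (survivors rescaled, as in inverted dropout) before propagation, so each node aggregates from a random subset of neighbours on every pass. This is an illustrative simplification of GEL's propagation with hypothetical names, not the paper's full knowledge-passing framework.

```python
import numpy as np

def dropnode_propagate(A_hat, X, drop_rate=0.5, layers=2, rng=None):
    """DropNode-style propagation: zero out whole node rows at random,
    rescale the survivors, then propagate over the normalised adjacency
    for several hops."""
    if rng is None:
        rng = np.random.default_rng()
    keep = rng.random(X.shape[0]) >= drop_rate
    H = X * keep[:, None] / (1.0 - drop_rate)  # inverted-dropout rescaling
    for _ in range(layers):
        H = A_hat @ H
    return H

# Symmetrically normalised adjacency of a 3-node path graph 0-1-2.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
deg = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(deg, deg))
X = np.eye(3)
H = dropnode_propagate(A_hat, X, drop_rate=0.5, layers=2,
                       rng=np.random.default_rng(0))
```

With drop_rate = 0 the function reduces to plain deterministic propagation; with a positive rate, each forward pass sees a different random subgraph, which is what weakens a node's dependence on any particular neighbour.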

16 pages, 2405 KiB  
Article
Inferring from References with Differences for Semi-Supervised Node Classification on Graphs
by Yi Luo, Guangchun Luo, Ke Yan and Aiguo Chen
Mathematics 2022, 10(8), 1262; https://doi.org/10.3390/math10081262 - 11 Apr 2022
Cited by 6 | Viewed by 1605
Abstract
Following the application of Deep Learning to graph data, Graph Neural Networks (GNNs) have become the dominant method for Node Classification on graphs in recent years. To assign nodes with preset labels, most GNNs inherit the end-to-end way of Deep Learning in which node features are input to models while labels of pre-classified nodes are used for supervised learning. However, while these methods can make full use of node features and their associations, they treat labels separately and ignore the structural information of those labels. To utilize information on label structures, this paper proposes a method called 3ference that infers from references with differences. Specifically, 3ference predicts what label a node has according to the features of that node in concatenation with both features and labels of its relevant nodes. With the additional information on labels of relevant nodes, 3ference captures the transition pattern of labels between nodes, as subsequent analysis and visualization revealed. Experiments on a synthetic graph and seven real-world graphs proved that this knowledge about label associations helps 3ference to predict accurately with fewer parameters, fewer pre-classified nodes, and varying label patterns compared with GNNs. Full article
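The core input construction of 3ference, concatenating a node's own features with both the features and the one-hot labels of its reference nodes, can be sketched as below. The mean-pooling over references and the function name are assumptions for illustration; the paper's aggregation may differ.

```python
import numpy as np

def build_reference_input(x_node, ref_feats, ref_labels):
    """Concatenate a node's own features with pooled (features + one-hot
    labels) of its reference nodes, so a downstream classifier can learn
    label-transition patterns between neighbours."""
    refs = np.concatenate([ref_feats, ref_labels], axis=1)
    return np.concatenate([x_node, refs.mean(axis=0)])

x = np.array([0.5, -1.0])                      # target node, 2 features
ref_feats = np.array([[1.0, 0.0], [0.0, 1.0]]) # two reference nodes
ref_labels = np.array([[1, 0, 0], [0, 1, 0]], dtype=float)  # one-hot, 3 classes
z = build_reference_input(x, ref_feats, ref_labels)  # length 2 + (2 + 3) = 7
```

Feeding labels of relevant nodes alongside their features is what distinguishes this setup from standard end-to-end GNN training, where labels appear only in the loss.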
