Article

Constrained Symmetric Non-Negative Matrix Factorization with Deep Autoencoders for Community Detection

1 College of Electronic and Information Engineering, Southwest University, Chongqing 400715, China
2 Training and Basic Education Management Office, Southwest University, Chongqing 400715, China
3 Key Laboratory of Cyber-Physical Fusion Intelligent Computing (South-Central Minzu University), State Ethnic Affairs Commission, Wuhan 430074, China
4 School of Computing and Information Science, Faculty of Science and Engineering, Anglia Ruskin University, Cambridge CB1 1PT, UK
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(10), 1554; https://doi.org/10.3390/math12101554
Submission received: 8 April 2024 / Revised: 6 May 2024 / Accepted: 8 May 2024 / Published: 16 May 2024

Abstract

Recently, community detection has emerged as a prominent research area in the analysis of complex network structures. Community detection models based on non-negative matrix factorization (NMF) are shallow and fail to fully discover the internal structure of complex networks. Thus, this article introduces a novel constrained symmetric non-negative matrix factorization with deep autoencoders (CSDNMF) as a solution to this issue. The model possesses the following advantages: (1) By integrating a deep autoencoder to discern the latent attributes bridging the original network and community assignments, it adeptly captures hierarchical information. (2) Introducing a graph regularizer facilitates a thorough comprehension of the community structure inherent within the target network. (3) By integrating a symmetry regularizer, the model’s capacity to learn undirected networks is augmented, thereby facilitating the precise detection of symmetry within the target network. Extensive experiments conducted on seven real-world networks demonstrate that the proposed CSDNMF model exhibits superior performance in community detection compared to state-of-the-art models.

1. Introduction

Networks are pervasive in the real world [1], and numerous intricate systems in nature can be effectively represented by networks, such as ecological networks [2] and social networks [3]. Consequently, network analysis has emerged as a paramount concern. The inherent community structure represents a fundamental aspect of networks, playing a pivotal role in network characterization, and community detection serves as the mechanism for investigating and identifying this essential feature. Community detection has emerged as a prominent research area in recent years [4]. It entails dividing networks into separate communities, marked by densely interconnected nodes within each community and fewer connections between different ones. The accurate detection of these communities can enhance our understanding of complex network structures. For instance, in social platforms, individuals with shared interests and close ties form cohesive communities, while those with no overlap are segregated into separate ones. Community detection is a research topic involving sociology, mathematics, physics, and other disciplines, with a wide range of applications, such as finding similar user groups in social networks and finding protein complexes in protein–protein interaction (PPI) networks.
Recently, significant efforts have been devoted to the study of community detection, leading to the establishment of a comprehensive model pyramid [5,6,7]. Numerous methodologies for community detection have been proposed and successfully applied to network structures, encompassing spectral-clustering-based methods [8], label-propagation-based methods [9], stochastic-block-model-based methods [10], and deep-learning-based methods [11]. It is important to note that non-negative matrix factorization (NMF) is widely utilized in community detection due to its commendable interpretability and scalability. The purpose of matrix decomposition is to decompose a given matrix into the product of two or more dimensionality-reducing factor matrices [12]. NMF extracts the basic features of the network from the matrix, which is an effective way to identify hidden data [13]. NMF-based methods approximately factorize the adjacency matrix $A$ of a given network into two non-negative factor matrices, $A \approx UV$ ($U \ge 0$, $V \ge 0$). The matrix $V$ symbolizes the community membership, with matrix $U$ acting as the projection matrix between the original network and the community membership space. In the SNMF model, the adjacency matrix $A$ is decomposed into a single factor, $A \approx VV^{T}$, which effectively improves the capture of symmetry information in undirected networks. In the orthogonal three-factor non-negative matrix factorization model, the adjacency matrix $A$ is decomposed into three factors, $A \approx VSV^{T}$, where $S$ provides more degrees of freedom for the decomposition. The CNMF model is often used in signed networks, where the basis vectors are represented as convex combinations of the data matrix, $A^{\pm} \approx A^{\pm}UV^{T}$ [14]. Existing networks can be classified into six categories: topology networks, signed networks, attributed networks, multilayer networks, dynamic networks, and large-scale networks [15]. Although NMF-based models have achieved good results in some cases, most existing NMF-based models are shallow models, such as SNMF [16], GNMF [17], and CNMF [14], and shallow models cannot completely capture the internal structure of the network. Moreover, certain deep NMF models overlook symmetry, a fundamental characteristic of undirected networks. Deep NMF is a novel and effective feature extraction method adopted in recent years; it obtains a deeper representation by further recursive decomposition of the factor matrices produced by the NMF algorithm. Deep NMF can discover the underlying hierarchical characteristics of the data, which prompts us to conduct further research on it [18]. The deep autoencoder, an extensively employed unsupervised learning algorithm in the field of deep learning [19], is capable of performing tasks such as feature extraction, dimensionality reduction, and data reconstruction. In addition, it can bridge the gap between low-level and high-level representations of networks for optimal community detection [20]. The deep autoencoder consists of two components: an encoder and a decoder [21]. By employing layer-by-layer encoding, a high-dimensional dataset is transformed into a low-dimensional encoding, which can then be reconstructed back to its original high-dimensional form using layer-by-layer decoding.
This process of dimensionality reduction and reconstruction not only helps us extract feature information but also removes noise and reconstructs a clean data representation, and feature extraction is more efficient in the low-dimensional space [22]. By integrating non-negative matrix factorization and the deep autoencoder, we can effectively decompose the mapping matrix $U$ to enhance the learning of hidden information that represents the intricate network structure [23]. The internal structure of the target network is learned, and the learning capacity on undirected networks is bolstered, by integrating graph regularization and symmetric regularization. Building upon the insights gained from the preceding discussion, we introduce a novel methodology termed constrained symmetric non-negative matrix factorization with deep autoencoders (CSDNMF) to tackle the community detection challenge. The structure of the proposed model is shown in Figure 1. Our key contributions can be summarized concisely as follows:
  • The proposed model leverages a deep autoencoder to address the community detection challenge in undirected networks, proficiently capturing hierarchical information within the network. Additionally, it integrates graph regularization and symmetric regularization into the learning objective, thereby enhancing the model’s representation learning capability.
  • The convergence proof of the proposed CSDNMF model is provided. The mathematical convergence of the proposed model is proved by two steps: (a) The non-increasing nature of the objective function is established under the update rule through the utilization of an auxiliary function. (b) The update rule’s finite solution satisfies the Karush–Kuhn–Tucker (KKT) optimality conditions.
  • Extensive experiments are conducted to evaluate the efficacy of various NMF-based community detection models. The results indicate that CSDNMF outperforms state-of-the-art NMF-based community detection methods.
The paper is structured as follows: Section 2 provides an introduction to the fundamental concepts; Section 3 presents the proposed CSDNMF model; Section 4 conducts an analysis of the designed algorithms; Section 5 showcases the experimental results; and finally, in Section 6, a comprehensive summary of the entire text is presented.

2. Related Works

In this section, we present the requisite symbols and foundational knowledge, followed by a concise overview of the community detection problem and the NMF model. Finally, we elucidate the application of the NMF model in addressing the community detection problem.

2.1. Notations

In this article, bold capital letters are employed to denote matrices. For a matrix $A$, its $i$-th row vector, $j$-th column vector, $(i,j)$-th element, trace, and squared Frobenius norm are, respectively, $A_{i\cdot}$, $A_{\cdot j}$, $A_{ij}$, $\mathrm{tr}(A)$, and $\|A\|_F^2$. An undirected and unweighted network is denoted as $G = (V, E)$, where $V = \{v_1, v_2, \ldots, v_n\}$ denotes a set of $n$ nodes and $E = \{e_1, e_2, \ldots, e_m\}$ denotes a set of $m$ edges. The topology of an unweighted undirected network $G$ is represented by the adjacency matrix $A$, where $a_{ij}$ equals 1 if an edge between nodes $v_i$ and $v_j$ exists; otherwise, it equals 0. In the case of an undirected network, the adjacency matrix $A$ exhibits symmetry, with all diagonal elements being 0. We concentrate on the challenge of community detection within undirected networks.
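To make the notation concrete, the following minimal sketch builds the symmetric adjacency matrix $A$ described above (the edge-list input format with 0-indexed node ids is our own illustrative assumption):

```python
import numpy as np

def adjacency_from_edges(edges, n):
    """Build the n x n adjacency matrix of an undirected, unweighted graph."""
    A = np.zeros((n, n))
    for i, j in edges:
        if i != j:             # diagonal elements stay 0, as in the text
            A[i, j] = 1.0
            A[j, i] = 1.0      # a_ij = a_ji: symmetry of undirected networks
    return A

A = adjacency_from_edges([(0, 1), (1, 2), (0, 2), (2, 3)], n=4)
assert np.allclose(A, A.T)     # A is symmetric
```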

2.2. Non-Negative Matrix Factorization (NMF)

The NMF algorithm was formally introduced by Lee and Seung in 1999 [24], providing a robust mathematical framework for reducing data dimensionality and extracting essential features. The goal of NMF is to represent a non-negative matrix $A$ as the product of two non-negative matrices $U$ and $V$, symbolized as $A \approx UV$. Here, matrix $A$ embodies the original dataset, whereas matrices $U$ and $V$ denote the basis matrix and coefficient matrix, respectively. Through this decomposition, NMF facilitates the exploration of latent structures and features within the data while effectively eliminating noise and redundant information, thereby accomplishing dimensionality reduction and feature extraction. NMF finds extensive applications across diverse areas, encompassing image processing [25], text mining [7], speech processing [26], and so on. Through the application of NMF, we can uncover latent patterns, themes, and features within our data, facilitating a deeper comprehension and enhanced utilization of the data. The non-negative matrix factorization technique is a potent tool that simplifies intricate data and extracts valuable insights, thereby holding immense significance for data analysis and machine learning.
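As a minimal sketch (our own illustration, with random initialization and a fixed iteration count as assumptions), the classical multiplicative update rules of Lee and Seung [28] for $\min \|A - UV\|_F^2$ can be written as follows:

```python
import numpy as np

def nmf(A, k, iters=200, eps=1e-10, seed=0):
    """Factorize A (m x n) into U (m x k) and V (k x n) with A ≈ UV."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    U = rng.random((m, k))
    V = rng.random((k, n))
    for _ in range(iters):
        U *= (A @ V.T) / (U @ V @ V.T + eps)   # basis matrix update
        V *= (U.T @ A) / (U.T @ U @ V + eps)   # coefficient matrix update
    return U, V
```

The eps guard against division by zero is our addition; the multiplicative form automatically preserves the non-negativity of U and V.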

2.3. Community Detection with NMF

Community detection fundamentally resembles a clustering problem, with nodes in the complex network serving as the objects to be clustered. NMF inherently possesses clustering capabilities. By applying orthogonal constraints to the matrix $V$, the NMF model can be seen as an alternative formulation of the K-means clustering model. Additionally, if the non-negative matrix $A$ is symmetric, it can be transformed into a symmetric decomposition form, $A \approx VV^{T}$, which aligns with the spectral clustering model. Both K-means and spectral clustering have demonstrated their efficacy in node clustering, thus making NMF a natural choice for community detection. In addition, a large part of the reason why the NMF model is applied to the community detection problem is that the generative ability of NMF can explain the community structure well. Usually, the adjacency matrix $A$ of the community network is decomposed as $A \approx UV$ ($U \ge 0$, $V \ge 0$), where $U$ represents the community characteristic matrix, $V$ represents the community indicator matrix, and $V_{im}$ represents the intensity with which the $i$-th node belongs to the $m$-th community. NMF-based community detection models typically involve four essential stages [15]: feature matrix construction, NMF-based model construction, model solving, and community detection. The initial step entails the construction of the feature matrix by extracting features from network $G$ and forming the corresponding matrix, denoted as $A$. In the subsequent stages, $A$ is factorized to generate the community indicator matrix $V$. Finally, during the community extraction stage, regardless of whether the communities overlap, the final result is obtained based on the community indicator matrix $V$.
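For non-overlapping communities, the extraction stage reduces to a one-line rule: assign node $i$ to the community with the largest membership intensity $V_{im}$. A minimal sketch (assuming $V$ is arranged nodes × communities, matching the $V_{im}$ notation above; transpose first if $V$ is stored communities × nodes):

```python
import numpy as np

def extract_communities(V):
    """V: n x k indicator matrix (rows = nodes, columns = communities)."""
    return np.argmax(V, axis=1)   # one community label per node
```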

3. Methods and Theories

3.1. Deep Non-Negative Matrix Factorization

The NMF model represents the adjacency matrix of a complex original network as the product of two matrices and employs a single mapping matrix to capture the underlying information within the intricate network. The information capacity of a single matrix is evidently limited, whereas the intricate nature of a complex network poses challenges for its representation through a single layer of mapping. Consequently, NMF can only capture simplistic information within the network. If the basis matrix $U$ is further decomposed into $p$ levels of factor matrices, each layer of abstraction can effectively capture inter-node similarities at a different level of granularity, thereby enabling more comprehensive information extraction from the network and enhancing the model’s representation learning capability. The adjacency matrix $A$ is specifically decomposed into $p + 1$ non-negative factor matrices in the following manner:
$$A \approx U_1 U_2 \cdots U_p V_p, \tag{1}$$
where $V_p \in \mathbb{R}_+^{k \times n}$, $U_i \in \mathbb{R}_+^{r_{i-1} \times r_i}$ $(1 \le i \le p)$, and we set $n = r_0 \ge r_1 \ge \cdots \ge r_{p-1} \ge r_p = k$.
The factorization presented in Equation (1) enables a hierarchical organization of abstraction across the $p$ layers, which can be given by the following factorizations:
$$V_{p-1} \approx U_p V_p, \quad \ldots, \quad V_2 \approx U_3 \cdots U_p V_p, \quad V_1 \approx U_2 \cdots U_p V_p. \tag{2}$$
This intricate architecture yields more precise community detection outcomes. According to Equation (2), we deduce the following objective function:
$$\min_{U_i, V_p} \mathcal{L}_D = \left\| A - U_1 U_2 \cdots U_p V_p \right\|_F^2, \quad \text{s.t. } V_p \ge 0,\ U_i \ge 0,\ i = 1, 2, \ldots, p. \tag{3}$$
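A minimal sketch of the decoder objective in Equation (3), computing the reconstruction error of the $p$-layer factorization (shapes follow the layer configuration $n = r_0 \ge \cdots \ge r_p = k$; the helper name is ours):

```python
import numpy as np
from functools import reduce

def decoder_loss(A, Us, Vp):
    """Us: list [U_1, ..., U_p]; Vp: k x n. Returns ||A - U_1···U_p V_p||_F^2."""
    recon = reduce(np.matmul, Us) @ Vp      # U_1 U_2 ... U_p V_p
    return np.linalg.norm(A - recon, "fro") ** 2
```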

3.2. Constrained Symmetric NMF with Deep Autoencoders (CSDNMF)

The deep autoencoder represents a prototypical unsupervised learning algorithm, wherein the fundamental concept involves multiplying the input data by a matrix to achieve dimensionality reduction; subsequently, the resulting data is multiplied by the transpose of the preceding weight matrix to restore an approximate representation of the original data. If the dimensionality reduction–reconstruction process is represented by multi-layer matrices, then the dimensions of these matrices must be reduced layer by layer, and we can regard the dimensionality reduction–reconstruction process as a process of encoding and decoding. The high-dimensional dataset is transformed into a lower-dimensional representation using layer-by-layer encoding and subsequently restored to its original high-dimensional form through layer-by-layer decoding. The closer the similarity between the pre-encoding and post-decoding data, the higher the quality of model training and the more meaningful the outcome of dimensionality reduction. The deep autoencoder consists of both an encoder and a decoder component, with Equation (3) specifically representing the decoder. To bolster the autoencoder’s capacity for representation learning, we incorporate the encoder component into Equation (3), thereby constructing an NMF model rooted in the autoencoder paradigm [27]. The encoder is the inverse process of the decoder, mapping the initial matrix $A$ through the multilayer projections onto $V_p$. The objective function for the encoder is deduced as follows:
$$\min_{U_i, V_p} \mathcal{L}_E = \left\| V_p - U_p^T \cdots U_2^T U_1^T A \right\|_F^2, \quad \text{s.t. } V_p \ge 0,\ U_i \ge 0,\ i = 1, 2, \ldots, p. \tag{4}$$
In order to refine the precision of comprehending the internal geometry of the network, we incorporate a graph regularizer. Here, $\lambda$ denotes the regularization parameter, $\mathrm{tr}(\cdot)$ computes the trace of the enclosed matrix, and $L = D - A$ denotes the graph Laplacian (where $D$ is a diagonal matrix comprising the row sums of $A$).
$$\min_{U_i, V_p} \lambda \mathcal{L}_{\mathrm{reg}} = \lambda\, \mathrm{tr}\!\left( V_p L V_p^T \right). \tag{5}$$
The adjacency matrix of an undirected network G is inherently a symmetric square matrix. To effectively capture and characterize the symmetry inherent in the target network, we introduce a symmetric regularization term, which improves the representation learning ability of the model for undirected networks by establishing symmetric constraints between the lower-dimensional representation of the original matrix and its transpose. The symmetric regularization term limits the process of encoding and decoding to the category of undirected networks, denoted as:
$$\frac{1}{2}\mu \mathcal{L}_{\mathrm{sym}} = \frac{1}{2}\mu \left\| U_1 U_2 \cdots U_p V_p - \left( U_1 U_2 \cdots U_p V_p \right)^T \right\|_F^2.$$
By integrating an encoder, decoder, graph regularization term, and symmetric regularization term, the learning objective of CSDNMF can be formulated as follows:
$$\begin{aligned} \min_{U_i, V_p} \mathcal{L} &= \mathcal{L}_D + \mathcal{L}_E + \lambda \mathcal{L}_{\mathrm{reg}} + \frac{1}{2}\mu \mathcal{L}_{\mathrm{sym}} \\ &= \left\| A - U_1 U_2 \cdots U_p V_p \right\|_F^2 + \left\| V_p - U_p^T \cdots U_2^T U_1^T A \right\|_F^2 + \lambda\, \mathrm{tr}\!\left( V_p L V_p^T \right) \\ &\quad + \frac{1}{2}\mu \left\| U_1 U_2 \cdots U_p V_p - \left( U_1 U_2 \cdots U_p V_p \right)^T \right\|_F^2, \quad \text{s.t. } V_p \ge 0,\ U_i \ge 0. \tag{6} \end{aligned}$$
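The full learning objective in Equation (6) can be evaluated directly; the following minimal sketch (function names and the direct translation are our own) combines the decoder, encoder, graph regularization, and symmetric regularization terms:

```python
import numpy as np
from functools import reduce

def csdnmf_loss(A, Us, Vp, lam=1.0, mu=1.0):
    """Evaluate Equation (6); lam and mu correspond to λ and μ."""
    omega = reduce(np.matmul, Us)                       # ω_p = U_1 U_2 ... U_p
    D = np.diag(A.sum(axis=1))
    L = D - A                                           # graph Laplacian
    dec = np.linalg.norm(A - omega @ Vp, "fro") ** 2    # decoder term L_D
    enc = np.linalg.norm(Vp - omega.T @ A, "fro") ** 2  # encoder term L_E
    reg = lam * np.trace(Vp @ L @ Vp.T)                 # graph regularizer
    sym = 0.5 * mu * np.linalg.norm(omega @ Vp - (omega @ Vp).T, "fro") ** 2
    return dec + enc + reg + sym                        # symmetric term last
```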

4. Optimization

In order to expedite the approximation of the factor matrices in the proposed model, we employ a pre-training approach to acquire the initial estimation of the factor matrices $U_i$ and $V_i$. Utilizing pre-training techniques proves highly effective for gradually revealing low-dimensional nonlinear structures within high-dimensional data. This process significantly diminishes the training duration of the proposed model, thus enhancing its efficiency and scalability. Following the pre-training phase, global fine-tuning further refines the model. The efficacy of pre-training has been previously evidenced, particularly in the domain of deep autoencoder networks. Detailed explanations of the update rules for the model will be presented in this section.
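A minimal sketch of the layer-by-layer pre-training, reusing the basic nmf routine sketched in Section 2.2 (that each layer factorizes the previous layer's $V$ is our reading of the layer-wise scheme, since the text does not spell out the input of each layer):

```python
def pretrain(A, layer_sizes):
    """layer_sizes: e.g. [256, 128, 42] for the Email dataset (Table 1)."""
    Us, Vs, X = [], [], A
    for r in layer_sizes:
        U, V = nmf(X, r)        # factorize the current matrix: X ≈ U V
        Us.append(U)
        Vs.append(V)
        X = V                   # the next layer factorizes this V
    return Us, Vs               # initial estimates of U_i and V_i
```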

4.1. Optimization of Update Rules for $U_i$ $(1 \le i \le p)$

The fine-tuning update rule for $U_i$ is derived by treating $U_i$ as the optimization variable. With all other factors held fixed, the objective function of CSDNMF is rewritten as:
$$\min_{U_i} \mathcal{L}(U_i) = \left\| A - \omega_{i-1} U_i \omega_{i+1} V_p \right\|_F^2 + \left\| V_p - \omega_{i+1}^T U_i^T \omega_{i-1}^T A \right\|_F^2 + \frac{1}{2}\mu \left\| \omega_{i-1} U_i \omega_{i+1} V_p - \left( \omega_{i-1} U_i \omega_{i+1} V_p \right)^T \right\|_F^2, \quad \text{s.t. } U_i \ge 0, \tag{7}$$
where $\omega_{i-1} = U_1 U_2 \cdots U_{i-1}$ and $\omega_{i+1} = U_{i+1} \cdots U_{p-1} U_p$. When $i = 1$, $\omega_{i-1}$ degenerates to the identity matrix; when $i = p$, $\omega_{i+1}$ degenerates to the identity matrix.
The Lagrangian multiplier matrix $\Theta_i$ is introduced to enforce the non-negative constraint on $U_i$, thereby enabling the solution of this subproblem. The significance of the Lagrange multiplier matrix is that it incorporates the non-negative constraint directly into the optimization process, ensuring that the elements of the factor matrix remain non-negative throughout. The resulting function can be expressed as follows:
$$\min_{U_i, \Theta_i} \mathcal{L}(U_i, \Theta_i) = \left\| A - \omega_{i-1} U_i \omega_{i+1} V_p \right\|_F^2 + \left\| V_p - \omega_{i+1}^T U_i^T \omega_{i-1}^T A \right\|_F^2 + \frac{1}{2}\mu \left\| \omega_{i-1} U_i \omega_{i+1} V_p - \left( \omega_{i-1} U_i \omega_{i+1} V_p \right)^T \right\|_F^2 - \mathrm{tr}\!\left( \Theta_i U_i^T \right). \tag{8}$$
Based on the universally accepted properties $\|X\|_F^2 = \mathrm{tr}(XX^T)$ and $\mathrm{tr}(AB) = \mathrm{tr}(BA)$, we can derive the following result:
$$\begin{aligned} \min_{U_i} \mathcal{L}(U_i, \Theta_i) = \mathrm{tr}\big( & A A^T + V_p V_p^T - 4 A V_p^T \omega_{i+1}^T U_i^T \omega_{i-1}^T + \omega_{i-1} U_i \omega_{i+1} V_p V_p^T \omega_{i+1}^T U_i^T \omega_{i-1}^T \\ & + \mu\, \omega_{i-1} U_i \omega_{i+1} V_p V_p^T \omega_{i+1}^T U_i^T \omega_{i-1}^T + \omega_{i+1}^T U_i^T \omega_{i-1}^T A A^T \omega_{i-1} U_i \omega_{i+1} \\ & - \mu\, \omega_{i-1} U_i \omega_{i+1} V_p\, \omega_{i-1} U_i \omega_{i+1} V_p - \Theta_i U_i^T \big). \tag{9} \end{aligned}$$
According to Lagrange’s theorem, in order to determine the minimum value of this expression, the partial derivative of $\mathcal{L}(U_i, \Theta_i)$ with respect to $U_i$ must equal 0. Consequently, we can deduce the subsequent expression:
$$\Theta_i = 2\, \omega_{i-1}^T \omega_{i-1} U_i \omega_{i+1} V_p V_p^T \omega_{i+1}^T - 4\, \omega_{i-1}^T A V_p^T \omega_{i+1}^T + 2\mu\, \omega_{i-1}^T \omega_{i-1} U_i \omega_{i+1} V_p V_p^T \omega_{i+1}^T + 2\, \omega_{i-1}^T A A^T \omega_{i-1} U_i \omega_{i+1} \omega_{i+1}^T - 2\mu\, \omega_{i-1}^T V_p^T \omega_{i+1}^T U_i^T \omega_{i-1}^T V_p^T \omega_{i+1}^T. \tag{10}$$
Then, in conjunction with the complementary slackness condition of the KKT conditions, the equation is constrained during the iterative process to ensure that the KKT conditions are satisfied throughout; in particular, the product of the multiplier and the variable must vanish at the extremum, from which we deduce the subsequent equation:
$$\Theta_i \odot U_i = \left( -4\, \omega_{i-1}^T A V_p^T \omega_{i+1}^T + 2 M_i - 2\mu\, \omega_{i-1}^T V_p^T \omega_{i+1}^T U_i^T \omega_{i-1}^T V_p^T \omega_{i+1}^T \right) \odot U_i = 0, \tag{11}$$
where $M_i = (1 + \mu)\, \omega_{i-1}^T \omega_{i-1} U_i \omega_{i+1} V_p V_p^T \omega_{i+1}^T + \omega_{i-1}^T A A^T \omega_{i-1} U_i \omega_{i+1} \omega_{i+1}^T$ and $\odot$ denotes the element-wise (Hadamard) product.
The update rule for $U_i$ can be derived by transforming Equation (11):
$$U_i \leftarrow U_i \odot \frac{2\, \omega_{i-1}^T A V_p^T \omega_{i+1}^T + \mu\, \omega_{i-1}^T V_p^T \omega_{i+1}^T U_i^T \omega_{i-1}^T V_p^T \omega_{i+1}^T}{M_i}, \tag{12}$$
where the division is performed element-wise.
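In numpy, Update rule (12) becomes a single element-wise step; a minimal sketch (the eps guard and the identity-matrix convention for the boundary layers are our additions):

```python
import numpy as np

def update_Ui(A, Ui, prev, nxt, Vp, mu=1.0, eps=1e-10):
    """prev = ω_{i-1} = U_1···U_{i-1}; nxt = ω_{i+1} = U_{i+1}···U_p.
    Pass identity matrices for prev (i = 1) and nxt (i = p)."""
    R = prev.T @ Vp.T @ nxt.T                        # ω_{i-1}^T V_p^T ω_{i+1}^T
    num = 2 * (prev.T @ A @ Vp.T @ nxt.T) + mu * (R @ Ui.T @ R)
    Mi = ((1 + mu) * (prev.T @ prev @ Ui @ nxt @ Vp @ Vp.T @ nxt.T)
          + prev.T @ A @ A.T @ prev @ Ui @ nxt @ nxt.T)
    return Ui * num / (Mi + eps)                     # element-wise ⊙ and ÷
```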

4.2. Optimization of Update Rules for $V_i$ $(1 \le i < p)$

With all other factors held fixed, the objective function of CSDNMF is reformulated with respect to $V_i$ as:
$$\min_{V_i} \mathcal{L}(V_i) = \left\| A - \omega_i V_i \right\|_F^2 + \left\| V_i - \omega_i^T A \right\|_F^2 + \frac{1}{2}\mu \left\| \omega_i V_i - \left( \omega_i V_i \right)^T \right\|_F^2, \tag{13}$$
where $\omega_i = U_1 \cdots U_i$.
By introducing the Lagrangian multiplier matrix $\Theta_i$ to enforce the non-negative constraint on $V_i$, we derive the following function:
$$\min_{V_i, \Theta_i} \mathcal{L}(V_i, \Theta_i) = \left\| A - \omega_i V_i \right\|_F^2 + \left\| V_i - \omega_i^T A \right\|_F^2 + \frac{1}{2}\mu \left\| \omega_i V_i - \left( \omega_i V_i \right)^T \right\|_F^2 - \mathrm{tr}\!\left( \Theta_i V_i^T \right). \tag{14}$$
Optimization problem (14) can be further expanded as follows:
$$\min_{V_i} \mathcal{L}(V_i, \Theta_i) = \mathrm{tr}\big( A A^T + V_i V_i^T - 4 A V_i^T \omega_i^T + \omega_i V_i V_i^T \omega_i^T + \omega_i^T A A^T \omega_i + \mu\, \omega_i V_i V_i^T \omega_i^T - \mu\, \omega_i V_i\, \omega_i V_i - \Theta_i V_i^T \big). \tag{15}$$
According to Lagrange’s extremum theorem and the Karush–Kuhn–Tucker (KKT) condition, we derive the following equation:
$$\Theta_i \odot V_i = \left( 2 V_i + 2\, \omega_i^T \omega_i V_i + 2\mu\, \omega_i^T \omega_i V_i - 2\mu\, \omega_i^T V_i^T \omega_i^T - 4\, \omega_i^T A \right) \odot V_i = 0. \tag{16}$$
The update rule for $V_i$ is derived by transforming Equation (16):
$$V_i \leftarrow V_i \odot \frac{2\, \omega_i^T A + \mu\, \omega_i^T V_i^T \omega_i^T}{V_i + \omega_i^T \omega_i V_i + \mu\, \omega_i^T \omega_i V_i}. \tag{17}$$

4.3. Optimization of Update Rules for V p

With all other factors held fixed, the objective function of CSDNMF is reformulated with respect to $V_p$ as:
$$\min_{V_p} \mathcal{L}(V_p) = \left\| A - \omega_p V_p \right\|_F^2 + \left\| V_p - \omega_p^T A \right\|_F^2 + \lambda\, \mathrm{tr}\!\left( V_p L V_p^T \right) + \frac{1}{2}\mu \left\| \omega_p V_p - \left( \omega_p V_p \right)^T \right\|_F^2, \quad \text{s.t. } V_p \ge 0, \tag{18}$$
where $\omega_p = U_1 U_2 \cdots U_p$.
By introducing the Lagrangian multiplier matrix $\Theta_p$ to enforce the non-negative constraint on $V_p$, we derive the following function:
$$\min_{V_p, \Theta_p} \mathcal{L}(V_p, \Theta_p) = \left\| A - \omega_p V_p \right\|_F^2 + \left\| V_p - \omega_p^T A \right\|_F^2 + \lambda\, \mathrm{tr}\!\left( V_p L V_p^T \right) + \frac{1}{2}\mu \left\| \omega_p V_p - \left( \omega_p V_p \right)^T \right\|_F^2 - \mathrm{tr}\!\left( \Theta_p V_p^T \right). \tag{19}$$
Following the derivation of the update rules for $U_i$ and $V_i$, Optimization problem (19) can be reformulated as follows:
$$\min_{V_p, \Theta_p} \mathcal{L}(V_p, \Theta_p) = \mathrm{tr}\big( A A^T + V_p V_p^T - 4 A V_p^T \omega_p^T + \omega_p V_p V_p^T \omega_p^T + \omega_p^T A A^T \omega_p + \mu\, \omega_p V_p V_p^T \omega_p^T - \mu\, V_p^T \omega_p^T V_p^T \omega_p^T + \lambda\, V_p L V_p^T - \Theta_p V_p^T \big). \tag{20}$$
Moreover, the subsequent equation can be derived as follows:
$$\Theta_p \odot V_p = \left( 2 V_p + 2\, \omega_p^T \omega_p V_p + 2\mu\, \omega_p^T \omega_p V_p + 2\lambda\, V_p L - 2\mu\, \omega_p^T V_p^T \omega_p^T - 4\, \omega_p^T A \right) \odot V_p = 0. \tag{21}$$
Substituting $L = D - A$ into Equation (21):
$$\Theta_p \odot V_p = \left( 2 V_p + 2\, \omega_p^T \omega_p V_p + 2\mu\, \omega_p^T \omega_p V_p + 2\lambda\, V_p D - 2\lambda\, V_p A - 2\mu\, \omega_p^T V_p^T \omega_p^T - 4\, \omega_p^T A \right) \odot V_p = 0. \tag{22}$$
The update rule for $V_p$, obtained by transforming Equation (22), is as follows:
$$V_p \leftarrow V_p \odot \frac{2\, \omega_p^T A + \mu\, \omega_p^T V_p^T \omega_p^T + \lambda\, V_p A}{V_p + \omega_p^T \omega_p V_p + \mu\, \omega_p^T \omega_p V_p + \lambda\, V_p D}. \tag{23}$$
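Update rules (17) and (23) translate to numpy in the same way; a minimal sketch (again with our eps guard as an addition):

```python
import numpy as np

def update_Vi(A, Vi, omega_i, mu=1.0, eps=1e-10):
    """omega_i = ω_i = U_1···U_i; implements Update rule (17)."""
    num = 2 * (omega_i.T @ A) + mu * (omega_i.T @ Vi.T @ omega_i.T)
    den = Vi + (1 + mu) * (omega_i.T @ omega_i @ Vi) + eps
    return Vi * num / den

def update_Vp(A, Vp, omega_p, D, lam=1.0, mu=1.0, eps=1e-10):
    """omega_p = ω_p = U_1···U_p; D is the degree matrix in L = D - A.
    Implements Update rule (23)."""
    num = 2 * (omega_p.T @ A) + mu * (omega_p.T @ Vp.T @ omega_p.T) + lam * (Vp @ A)
    den = Vp + (1 + mu) * (omega_p.T @ omega_p @ Vp) + lam * (Vp @ D) + eps
    return Vp * num / den
```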

4.4. Convergence Analysis

In this section, we establish the convergence of the proposed CSDNMF model using the update rules provided in Update rule (12) and Update rule (23). Since the update of V i does not impact the objective function outlined in Optimization problem (6), we only need to consider the influence of U i and V p . Given the non-negative constraint, the objective function’s lower limit in Optimization problem (6) is 0. Hence, the convergence analysis of the model can be simplified to proving the following two points: (a) The objective function in Optimization problem (6) demonstrates non-increasing behavior under the update rules outlined in (12) and (23). (b) The finite solutions yielded by the update rules in (12) and (23) adhere to the KKT optimality condition.
(a) The objective function in Equation (6) demonstrates non-increasing behavior under the update rules outlined in (12) and (23).
The non-increasing property of the objective function under the update rules will be demonstrated by employing the auxiliary function method, which is defined as follows:
Definition 1. 
The function $M(u, u')$ is considered an auxiliary function of $F(u)$ when the following conditions are met:
$$M(u, u') \ge F(u) \quad \textit{and} \quad M(u, u) = F(u).$$
Lemma 1. 
For an auxiliary function $M(u, u')$ of $F(u)$, $F(u)$ is non-increasing under the following update rule:
$$u^{t+1} = \arg\min_{u} M(u, u^{t}). \tag{24}$$
The detailed proof of Lemma 1 can be found in reference [28]; thus, we only need to appropriately define the auxiliary function to ensure that our update formula satisfies Equation (24).
Firstly, we establish the non-increasing property of the objective function in Optimization problem (6) under the update rule of $U_i$. The $U_i$-related component of the objective function is denoted by $F(u_{jk})$:
$$F(u_{jk}) = \left\| A - \omega_{i-1} U_i \omega_{i+1} V_p \right\|_F^2 + \left\| V_p - \omega_{i+1}^T U_i^T \omega_{i-1}^T A \right\|_F^2 + \frac{1}{2}\mu \left\| \omega_{i-1} U_i \omega_{i+1} V_p - \left( \omega_{i-1} U_i \omega_{i+1} V_p \right)^T \right\|_F^2. \tag{25}$$
We judiciously select the auxiliary function of $F(u_{jk})$ in the following manner:
$$M(u_{jk}, u_{jk}^t) = F(u_{jk}^t) + F'(u_{jk}^t)(u_{jk} - u_{jk}^t) + \frac{\left[ (1+\mu)\, \omega_{i-1}^T \omega_{i-1} U_i \omega_{i+1} V_p V_p^T \omega_{i+1}^T + \omega_{i-1}^T A A^T \omega_{i-1} U_i \omega_{i+1} \omega_{i+1}^T \right]_{jk}}{2 u_{jk}^t} (u_{jk} - u_{jk}^t)^2. \tag{26}$$
Proof. 
$M(u, u) = F(u)$ is evidently apparent, so we only need to demonstrate that $M(u, u') \ge F(u)$.
We expand the function $F(u_{jk})$ at the point $u_{jk}^t$ by employing a second-order Taylor series expansion:
$$F(u_{jk}) = F(u_{jk}^t) + F'(u_{jk}^t)(u_{jk} - u_{jk}^t) + \frac{1}{2} \left[ (1+\mu)\, \omega_{i-1}^T \omega_{i-1} \omega_{i+1} V_p V_p^T \omega_{i+1}^T + \omega_{i-1}^T A A^T \omega_{i-1} \omega_{i+1} \omega_{i+1}^T - \mu\, \omega_{i-1}^T V_p^T \omega_{i+1}^T \omega_{i-1}^T V_p^T \omega_{i+1}^T \right]_{jk} (u_{jk} - u_{jk}^t)^2. \tag{27}$$
The comparison of Equation (26) with Equation (27) reveals that the condition $M(u, u') \ge F(u)$ can be expressed equivalently as follows:
$$\frac{\left[ (1+\mu)\, \omega_{i-1}^T \omega_{i-1} U_i \omega_{i+1} V_p V_p^T \omega_{i+1}^T + \omega_{i-1}^T A A^T \omega_{i-1} U_i \omega_{i+1} \omega_{i+1}^T \right]_{jk}}{u_{jk}^t} \ge \left[ (1+\mu)\, \omega_{i-1}^T \omega_{i-1} \omega_{i+1} V_p V_p^T \omega_{i+1}^T + \omega_{i-1}^T A A^T \omega_{i-1} \omega_{i+1} \omega_{i+1}^T - \mu\, \omega_{i-1}^T V_p^T \omega_{i+1}^T \omega_{i-1}^T V_p^T \omega_{i+1}^T \right]_{jk}. \tag{28}$$
Given that $u_{jk}^t > 0$, Inequality (28) can be equivalently expressed as follows:
$$\left[ (1+\mu)\, \omega_{i-1}^T \omega_{i-1} U_i \omega_{i+1} V_p V_p^T \omega_{i+1}^T + \omega_{i-1}^T A A^T \omega_{i-1} U_i \omega_{i+1} \omega_{i+1}^T \right]_{jk} \ge \left[ (1+\mu)\, \omega_{i-1}^T \omega_{i-1} \omega_{i+1} V_p V_p^T \omega_{i+1}^T + \omega_{i-1}^T A A^T \omega_{i-1} \omega_{i+1} \omega_{i+1}^T - \mu\, \omega_{i-1}^T V_p^T \omega_{i+1}^T \omega_{i-1}^T V_p^T \omega_{i+1}^T \right]_{jk} u_{jk}^t. \tag{29}$$
The following inequality can be readily derived:
$$\left( \omega_{i-1}^T \omega_{i-1} U_i \omega_{i+1} V_p V_p^T \omega_{i+1}^T \right)_{jk} = \sum_{j,k} \left( \omega_{i-1}^T \omega_{i-1} \omega_{i+1} V_p V_p^T \omega_{i+1}^T \right)_{jk} u_{jk}^t > \left( \omega_{i-1}^T \omega_{i-1} \omega_{i+1} V_p V_p^T \omega_{i+1}^T \right)_{jk} u_{jk}^t. \tag{30}$$
Similarly, the following formula can be obtained:
$$\left( \omega_{i-1}^T A A^T \omega_{i-1} U_i \omega_{i+1} \omega_{i+1}^T \right)_{jk} > \left( \omega_{i-1}^T A A^T \omega_{i-1} \omega_{i+1} \omega_{i+1}^T \right)_{jk} u_{jk}^t. \tag{31}$$
The condition $M(u, u') \ge F(u)$ is proven by combining Inequalities (30) and (31), thereby establishing $M(u_{jk}, u_{jk}^t)$ as an auxiliary function of $F(u_{jk})$. □ The subsequent step involves demonstrating the essential equivalence between the updates presented in Update rules (12) and (23) and the update outlined in Equation (24).
Substituting the auxiliary function in Equation (26) into Equation (24) and setting the derivative with respect to $u_{jk}$ to zero yields:
$$\begin{aligned} & u_{jk}^{t+1} = \arg\min_{u_{jk}} M(u_{jk}, u_{jk}^t) \\ \Rightarrow\ & F'(u_{jk}^t) + \frac{\left[ (1+\mu)\, \omega_{i-1}^T \omega_{i-1} U_i \omega_{i+1} V_p V_p^T \omega_{i+1}^T + \omega_{i-1}^T A A^T \omega_{i-1} U_i \omega_{i+1} \omega_{i+1}^T \right]_{jk}}{u_{jk}^t} (u_{jk} - u_{jk}^t) = 0 \\ \Rightarrow\ & u_{jk} = u_{jk}^t - u_{jk}^t\, \frac{F'(u_{jk}^t)}{\left[ (1+\mu)\, \omega_{i-1}^T \omega_{i-1} U_i \omega_{i+1} V_p V_p^T \omega_{i+1}^T + \omega_{i-1}^T A A^T \omega_{i-1} U_i \omega_{i+1} \omega_{i+1}^T \right]_{jk}} \\ \Rightarrow\ & u_{jk}^{t+1} = u_{jk}^t \cdot \frac{\left[ 2\, \omega_{i-1}^T A V_p^T \omega_{i+1}^T + \mu\, \omega_{i-1}^T V_p^T \omega_{i+1}^T U_i^T \omega_{i-1}^T V_p^T \omega_{i+1}^T \right]_{jk}}{\left[ (1+\mu)\, \omega_{i-1}^T \omega_{i-1} U_i \omega_{i+1} V_p V_p^T \omega_{i+1}^T + \omega_{i-1}^T A A^T \omega_{i-1} U_i \omega_{i+1} \omega_{i+1}^T \right]_{jk}}. \tag{32} \end{aligned}$$
The update rule depicted in Equation (32) mirrors that of Update rule (12), thereby guaranteeing the non-increasing characteristic of the objective function according to the update rule outlined in (12). Similarly, we can affirm the non-increasing property of the objective function under the update rule delineated in (23) through a comparable methodology.
(b) The finite solutions yielded by the update rules in (12) and (23) adhere to the KKT optimality condition.
The convergence condition enables us to write $U_i = U_i^{t+1} = U_i^t$, where $t$ denotes the $t$-th iteration. Expanding the update rule in Equation (12) then gives:
$$\left( 2 M_i - 4\, \omega_{i-1}^T A V_p^T \omega_{i+1}^T - 2\mu\, \omega_{i-1}^T V_p^T \omega_{i+1}^T U_i^T \omega_{i-1}^T V_p^T \omega_{i+1}^T \right) \odot U_i = 0. \tag{33}$$
The equivalence between Equations (11) and (33) is evident, thereby satisfying the complementary slackness condition of KKT optimality. The update rule for V p can be demonstrated using the same methodology. The convergence of the objective function under the update rule in (12) and (23) is guaranteed, as evidenced by the content presented in this section.

4.5. Algorithm Analysis

The proposed CSDNMF model in this paper comprises two components:
  • Pre-training process: The pre-training process involves layer-by-layer training, commencing from the first layer and advancing up to the p-th layer, in alignment with the objective function. This ensures that each layer’s loss function is minimized to the fullest extent possible. The time complexity of the pre-training process is $O\!\left( p\, t_p \left( n^2 r + n r^2 \right) \right)$. Here, $p$ denotes the number of layers in the layer configuration, $t_p$ represents the number of pre-training iterations, and $r$ denotes the maximum value within the layer configuration.
  • Fine-tuning process: The update rules mandate the iterative optimization of $U_i$, $V_i$, and $V_p$ for each layer until the model converges. The time complexity of the fine-tuning process is $O\!\left( p\, t_f \left( n^2 r + n r^2 \right) \right)$, where $t_f$ represents the number of fine-tuning iterations.
The overall time complexity of the CSDNMF model can therefore be summarized as
$$O\!\left( p \left( t_f + t_p \right) \left( n^2 r + n r^2 \right) \right).$$
The procedure of the CSDNMF model is illustrated in Algorithm 1.
Algorithm 1 The optimization procedure of the CSDNMF model
Require: adjacency matrix A; layer configuration L; number of communities k; graph regularization parameter λ; symmetric regularization parameter μ
Ensure: mapping matrices U_i; feature matrices V_i; community membership matrix V_p
 /* Pre-training */
 U_1, V_1 = Pre-training(A, l_1)
 for i = 2 : p do
  U_i, V_i = Pre-training(V_{i-1}, l_i)
 end for
 /* Fine-tuning */
 while not converged do
  for i = 1 : p do
   update U_i according to Update rule (12)
  end for
  for i = 1 : p − 1 do
   update V_i according to Update rule (17)
  end for
  update V_p according to Update rule (23)
 end while
 return U_i, V_i, V_p

5. Experiments

5.1. Evaluation Metric

In order to obtain reliable experimental results, we adopt four evaluation metrics that have been empirically validated and are extensively used: Normalized Mutual Information (NMI), Adjusted Rand Index (ARI), Accuracy (ACC), and F-score.
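A minimal sketch of how these four metrics can be computed with scikit-learn and scipy (the macro-averaged F-score and the Hungarian matching of predicted to true labels for ACC/F-score are standard choices, but assumptions on our part, as the paper does not specify its variants):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import (adjusted_rand_score, f1_score,
                             normalized_mutual_info_score)

def best_mapping(y_true, y_pred):
    """Relabel predicted communities to best match the ground truth."""
    k = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((k, k))
    for t, p in zip(y_true, y_pred):
        cost[p, t] -= 1                        # negative counts: maximize agreement
    rows, cols = linear_sum_assignment(cost)   # Hungarian algorithm
    remap = dict(zip(rows, cols))
    return np.array([remap[p] for p in y_pred])

def evaluate(y_true, y_pred):
    y_map = best_mapping(y_true, y_pred)
    return {"NMI": normalized_mutual_info_score(y_true, y_pred),
            "ARI": adjusted_rand_score(y_true, y_pred),
            "F-score": f1_score(y_true, y_map, average="macro"),
            "ACC": float((y_map == y_true).mean())}
```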

5.2. Dataset

In our experiments, seven publicly available and widely adopted datasets obtained from practical applications are used to evaluate the performance of each model. In order to verify the performance of each model on datasets of different scales, we select datasets spanning a wide range of node counts: the smallest is Email, with 1005 nodes, and the largest is Facebook, with 22,470 nodes. Selecting datasets in this way allows the experimental results to verify whether the proposed model generalizes across datasets of different sizes. The particulars of each dataset are delineated in Table 1.

5.3. Comparison Models

To showcase the superiority of the proposed CSDNMF model, we have chosen seven prominent NMF-based community detection models for comparative analysis:
  • NMF: The NMF model, based on non-negative matrix factorization, serves as a fundamental approach for community detection [15].
  • SNMF: The SNMF algorithm employs a distinctive factor matrix to characterize the symmetry of undirected networks [16].
  • ONMF: The ONMF imposes an orthogonal constraint on the mapping matrix U [29].
  • MNMF: MNMF introduces a novel approach called Modularized NMF, which integrates the community structure into network embedding [30].
  • GNMF: The GNMF model, which integrates NMF with graph regularization, is proposed for addressing community detection problems [17].
  • DNMF: The DNMF model, a deep non-negative matrix factorization approach, exclusively incorporates the decoder component [27].
  • DANMF: The DANMF model is a deep autoencoder NMF, comprising an encoder and a decoder [27].

5.4. Experiments Results

The results of the four evaluation metrics across the various datasets and models are presented in Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8. The optimal outcomes are emphasized in the color blue. Each experiment is executed ten times on each dataset and the average outcome is reported. The tables clearly demonstrate that the CSDNMF model proposed in this paper outperforms the other models in the majority of cases. The CSDNMF model only ranks second on the F-score metric for the Wiki dataset, the ARI metric for the Citeseer dataset, and the ACC metric for the Lastfm dataset. The results demonstrate that integrating a deep autoencoder into the model outperforms the shallow models and, further, that incorporating symmetric regularization into the deep model surpasses the other deep models in terms of performance on undirected networks.

5.5. Parameter Sensitivity

The parameters λ and μ adjust the contributions of graph regularization and symmetric regularization, respectively, and both parameters range over $\{10^{-3}, 10^{-2}, 10^{-1}, 10^{0}, 10^{1}\}$. We conducted experiments to assess the sensitivity to the parameter values on the Email and Lastfm datasets. When one parameter is tested, the other is fixed to a value of 1. The experimental results are presented in Figure 2, Figure 3, Figure 4 and Figure 5. The results demonstrate the stability of CSDNMF across various values of λ and μ while emphasizing that appropriate parameter selection can enhance model performance and robustness. In this experiment, both λ and μ were set to 1.
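The sweep itself is a simple grid loop; a minimal sketch (run_csdnmf is a hypothetical driver returning community labels, and evaluate is the metric helper sketched in Section 5.1):

```python
grid = [1e-3, 1e-2, 1e-1, 1e0, 1e1]
for lam in grid:
    # vary λ over the grid while holding μ at 1, as described above
    labels = run_csdnmf(A, layers=[256, 128, 42], lam=lam, mu=1.0)
    print(lam, evaluate(y_true, labels)["NMI"])
```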

5.6. Wilcoxon Signed Rank Test

To rigorously compare the performance of the proposed CSDNMF model with other state-of-the-art models, we conduct a widely recognized verification experiment known as the Wilcoxon Signed Rank Test [31]. The results of the Wilcoxon Signed Rank Test, conducted at a specified significance level, are outlined in Table 9; the p-value serves as the key indicator of the significance level. Analyzing the data in Table 9, we confidently affirm that the CSDNMF model demonstrates significantly greater accuracy in community detection compared to the other models, at a confidence level of 95%.
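A minimal sketch of such a test with scipy, pairing the per-dataset NMI scores of CSDNMF against NMF from Tables 2–8 (using only the NMI column here for illustration; the paper's exact pairing of scores is not stated):

```python
from scipy.stats import wilcoxon

# NMI on Email, Wiki, Cora, Citeseer, Lastfm, Pubmed, Facebook (Tables 2-8)
csdnmf = [0.7014, 0.3224, 0.3746, 0.1524, 0.6078, 0.2464, 0.1263]
nmf    = [0.6694, 0.2651, 0.3142, 0.1172, 0.5043, 0.1494, 0.0601]

stat, p_value = wilcoxon(csdnmf, nmf)
print(p_value, p_value < 0.05)   # p < 0.05 supports the 95% confidence claim
```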

6. Conclusions

The utilization of NMF-based models in community detection problems is driven by their advantageous traits. However, the linearity of NMF poses limitations, particularly when dealing with nonlinear complex networks. To overcome this constraint, we propose the novel CSDNMF model, which integrates deep learning techniques to outperform shallow models. Our proposed CSDNMF model leverages the deep learning capabilities of deep autoencoders, the effective learning abilities of graph regularization for capturing network internal structures, and the representation power of undirected networks through symmetric regularization. Through extensive experiments conducted on seven datasets and employing four evaluation metrics, we demonstrate the model’s superiority over other NMF-based models. Additionally, we showcase its robustness to regularization parameters.

Author Contributions

Conceptualization, W.Z., S.Y. and L.W.; Methodology, W.Z., S.Y., L.W. and W.G.; Software, W.Z., S.Y., L.W. and W.G.; Validation, W.Z., L.W., W.G. and M.-F.L.; Formal analysis, L.W., W.G. and M.-F.L.; Investigation, S.Y. and L.W.; Resources, L.W., W.G. and M.-F.L.; Data curation, L.W., W.G. and M.-F.L.; Writing—original draft, W.Z.; Writing—review and editing, S.Y. and M.-F.L.; Visualization, W.Z., S.Y., L.W. and W.G.; Supervision, S.Y., L.W. and W.G.; Project administration, M.-F.L.; Funding acquisition, M.-F.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Open Fund of the Key Laboratory of Cyber-Physical Fusion Intelligent Computing (South-Central Minzu University), State Ethnic Affairs Commission under Grant CPFIC202303.

Data Availability Statement

The data used to support the findings of the study are available from the first author upon request. The author’s email address is zw848438346@163.com.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Wang, Y.; Zou, L.; Ma, L.; Zhao, Z.; Guo, J. A survey on control for Takagi-Sugeno fuzzy systems subject to engineering-oriented complexities. Syst. Sci. Control Eng. 2021, 9, 334–349. [Google Scholar] [CrossRef]
  2. Cui, L.; Wang, J.; Sun, L.; Lv, C. Construction and optimization of green space ecological networks in urban fringe areas: A case study with the urban fringe area of Tongzhou district in Beijing. J. Clean. Prod. 2020, 276, 124266. [Google Scholar] [CrossRef]
  3. Sanchez-Vega, F.; Mina, M.; Armenia, J.; Chatila, W.K.; Luna, A.; La, K.; Dimitriadoy, S.; Liu, D.L.; Kantheti, H.S.; Heins, Z.; et al. Abstract 3302: The molecular landscape of oncogenic signaling pathways in The Cancer Genome Atlas. Cancer Res. 2018, 78, 3302. [Google Scholar] [CrossRef]
  4. Hu, J.; Zhang, H.; Liu, H.; Yu, X. A survey on sliding mode control for networked control systems. Int. J. Syst. Sci. 2021, 52, 1129–1147. [Google Scholar] [CrossRef]
  5. Fortunato, S. Community detection in graphs. Phys. Rep. 2010, 486, 75–174. [Google Scholar] [CrossRef]
  6. Das, S.; Biswas, A. Deployment of Information Diffusion for Community Detection in Online Social Networks: A Comprehensive Review. IEEE Trans. Comput. Soc. Syst. 2021, 8, 1083–1107. [Google Scholar] [CrossRef]
  7. Barman, P.C.; Iqbal, N.; Lee, S.Y. Nonnegative Matrix Factorization Based Text Mining: Feature Extraction and Classification. In Proceedings of the 13th International Conference, ICONIP 2006, Hong Kong, China, 3–6 October 2006; pp. 703–712. [Google Scholar]
  8. Tang, F.; Wang, C.; Su, J.; Wang, Y. Spectral clustering-based community detection using graph distance and node attributes. Comput. Stat. 2020, 35, 69–94. [Google Scholar] [CrossRef]
  9. Xie, J.; Szymanski, B.K. Community detection using a neighborhood strength driven Label Propagation Algorithm. In Proceedings of the 2011 IEEE Network Science Workshop, West Point, NY, USA, 22–24 June 2011; pp. 188–195. [Google Scholar] [CrossRef]
  10. Lyzinski, V.; Tang, M.; Athreya, A.; Park, Y.; Priebe, C.E. Community Detection and Classification in Hierarchical Stochastic Blockmodels. IEEE Trans. Netw. Sci. Eng. 2017, 4, 13–26. [Google Scholar] [CrossRef]
  11. Wu, L.; Zhang, Q.; Chen, C.H.; Guo, K.; Wang, D. Deep Learning Techniques for Community Detection in Social Networks. IEEE Access 2020, 8, 96016–96026. [Google Scholar] [CrossRef]
  12. Peng, C.; Hou, X.; Chen, Y.; Kang, Z.; Chen, C.; Cheng, Q. Global and local similarity learning in multi-kernel space for nonnegative matrix factorization. Knowl.-Based Syst. 2023, 279, 110946. [Google Scholar] [CrossRef]
  13. Fathi Hafshejani, S.; Moaberfard, Z. Initialization for nonnegative matrix factorization: A comprehensive review. Int. J. Data Sci. Anal. 2023, 16, 119–134. [Google Scholar] [CrossRef]
  14. Yan, C.; Chang, Z. Modularized convex nonnegative matrix factorization for community detection in signed and unsigned networks. Phys. A Stat. Mech. Its Appl. 2020, 539, 122904. [Google Scholar] [CrossRef]
  15. He, C.; Fei, X.; Cheng, Q.; Li, H.; Hu, Z.; Tang, Y. A Survey of Community Detection in Complex Networks Using Nonnegative Matrix Factorization. IEEE Trans. Comput. Soc. Syst. 2022, 9, 440–457. [Google Scholar] [CrossRef]
  16. Luo, X.; Liu, Z.; Jin, L.; Zhou, Y.; Zhou, M. Symmetric Nonnegative Matrix Factorization-Based Community Detection Models and Their Convergence Analysis. IEEE Trans. Neural Netw. Learn. Syst. 2022, 33, 1203–1215. [Google Scholar] [CrossRef] [PubMed]
  17. Deng, P.; Li, T.; Wang, H.; Wang, D.; Horng, S.J.; Liu, R. Graph Regularized Sparse NonNegative Matrix Factorization for Clustering. IEEE Trans. Comput. Soc. Syst. 2023, 10, 910–921. [Google Scholar] [CrossRef]
  18. Chen, W.-S.; Zeng, Q.; Pan, B. A survey of deep nonnegative matrix factorization. Neurocomputing 2022, 491, 305–320. [Google Scholar] [CrossRef]
  19. Tang, J.; Deng, C.; Huang, G.B.; Hou, J. A fast learning algorithm for multi-layer extreme learning machine. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 175–178. [Google Scholar] [CrossRef]
  20. Al-sharoa, E.; Rahahleh, B. Community detection in networks through a deep robust auto-encoder nonnegative matrix factorization. Eng. Appl. Artif. Intell. 2023, 118, 105657. [Google Scholar] [CrossRef]
  21. Lv, L.; Bardou, D.; Liu, Y.; Hu, P. Deep Autoencoder-like nonnegative matrix factorization with graph regularized for link prediction in dynamic networks. Appl. Soft Comput. 2023, 148, 110832. [Google Scholar] [CrossRef]
  22. Li, Y.; Chen, J.; Chen, C.; Yang, L.; Zheng, Z. Contrastive Deep Nonnegative Matrix Factorization For Community Detection. In Proceedings of the ICASSP 2024—2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Seoul, Republic of Korea, 14–19 April 2024; pp. 6725–6729. [Google Scholar] [CrossRef]
  23. Bengio, Y.; Courville, A.; Vincent, P. Representation Learning: A Review and New Perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1798–1828. [Google Scholar] [CrossRef] [PubMed]
  24. Lee, D.; Seung, H. Learning the parts of objects by nonnegative matrix factorization. Nature 1999, 401, 788–791. [Google Scholar] [CrossRef]
  25. Luong, T.X.; Kim, B.-K.; Lee, S.-Y. Color image processing based on Nonnegative Matrix Factorization with Convolutional Neural Network. In Proceedings of the 2014 International Joint Conference on Neural Networks (IJCNN), Beijing, China, 6–11 July 2014; pp. 2130–2135. [Google Scholar] [CrossRef]
  26. Nie, S.; Liang, S.; Liu, W.; Zhang, X.; Tao, J. Deep Learning Based Speech Separation via NMF-Style Reconstructions. IEEE/ACM Trans. Audio Speech Lang. Process. 2018, 26, 2043–2055. [Google Scholar] [CrossRef]
  27. Ye, F.; Chen, C.; Zheng, Z. Deep Autoencoder-like Nonnegative Matrix Factorization for Community Detection. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, Torino, Italy, 22–26 October 2018; Association for Computing Machinery: New York, NY, USA, 2018. [Google Scholar] [CrossRef]
  28. Lee, D.; Seung, H.S. Algorithms for Nonnegative Matrix Factorization. In Advances in Neural Information Processing Systems; Leen, T., Dietterich, T., Tresp, V., Eds.; MIT Press: Cambridge, MA, USA, 2000; Volume 13. [Google Scholar]
  29. Choi, S. Algorithms for orthogonal nonnegative matrix factorization. In Proceedings of the 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–8 June 2008; pp. 1828–1832. [Google Scholar] [CrossRef]
  30. Ghadirian, M.; Bigdeli, N. Hybrid adaptive modularized tri-factor nonnegative matrix factorization for community detection in complex networks. Sci. Iran. 2023, 30, 1068–1084. [Google Scholar] [CrossRef]
  31. Taheri, S.M.; Hesamian, G. A generalization of the Wilcoxon signed-rank test and its applications. Stat. Pap. 2013, 54, 457–470. [Google Scholar] [CrossRef]
Figure 1. The structure of the CSDNMF model is depicted, with the upper left representing the encoder component, the upper right representing the decoder component, and the lower representing the symmetric regularization term.
Figure 2. The influence of graph regularization λ on the Email dataset.
Figure 3. The influence of symmetric regularization μ on the Email dataset.
Figure 4. The influence of graph regularization λ on the Lastfm dataset.
Figure 5. The influence of symmetric regularization μ on the Lastfm dataset.
Table 1. Dataset and layer configuration.

Dataset  | Nodes  | Communities | Layer Configuration
Email    | 1005   | 42          | 1005-256-128-42
Wiki     | 2405   | 19          | 2405-256-128-19
Cora     | 2708   | 7           | 2708-256-64-7
Citeseer | 3312   | 6           | 3312-256-64-6
Facebook | 22,470 | 4           | 22,470-512-64-4
Lastfm   | 7624   | 18          | 7624-256-128-18
Pubmed   | 19,717 | 3           | 19,717-512-64-3
Table 2. Performance evaluation of each model on the Email dataset.

Model  | NMI    | ARI    | F-Score | ACC
NMF    | 0.6694 | 0.4298 | 0.4953  | 0.6597
SNMF   | 0.6880 | 0.4223 | 0.5358  | 0.6897
ONMF   | 0.6964 | 0.4918 | 0.5611  | 0.6896
MNMF   | 0.2366 | 0.0126 | 0.0295  | 0.1352
GNMF   | 0.6631 | 0.4362 | 0.5055  | 0.6152
DNMF   | 0.6744 | 0.4713 | 0.5609  | 0.6612
DANMF  | 0.6863 | 0.4919 | 0.5797  | 0.6736
CSDNMF | 0.7014 | 0.5146 | 0.5937  | 0.6973

The optimal outcomes are emphasized in the color blue.
Table 3. Performance evaluation of each model on the Wiki dataset.

Model  | NMI    | ARI    | F-Score | ACC
NMF    | 0.2651 | 0.1294 | 0.2041  | 0.4252
SNMF   | 0.2912 | 0.1223 | 0.2045  | 0.4397
ONMF   | 0.2812 | 0.1273 | 0.2056  | 0.4478
MNMF   | 0.0462 | 0.0215 | 0.0261  | 0.1063
GNMF   | 0.2722 | 0.1183 | 0.2152  | 0.4135
DNMF   | 0.2863 | 0.1121 | 0.1904  | 0.4083
DANMF  | 0.3136 | 0.1352 | 0.2285  | 0.4525
CSDNMF | 0.3224 | 0.1394 | 0.2231  | 0.4567

The optimal outcomes are emphasized in the color blue.
Table 4. Performance evaluation of each model on the Cora dataset.

Model  | NMI    | ARI    | F-Score | ACC
NMF    | 0.3142 | 0.1873 | 0.3411  | 0.5298
SNMF   | 0.3309 | 0.2305 | 0.3823  | 0.5462
ONMF   | 0.2046 | 0.1410 | 0.2877  | 0.4494
MNMF   | 0.0016 | 0.0012 | 0.0147  | 0.2044
GNMF   | 0.3362 | 0.1937 | 0.3655  | 0.5389
DNMF   | 0.3403 | 0.2466 | 0.3854  | 0.5016
DANMF  | 0.3484 | 0.2381 | 0.3931  | 0.5374
CSDNMF | 0.3746 | 0.2538 | 0.4039  | 0.5545

The optimal outcomes are emphasized in the color blue.
Table 5. Performance evaluation of each model on the Citeseer dataset.

Model  | NMI    | ARI    | F-Score | ACC
NMF    | 0.1172 | 0.0446 | 0.1436  | 0.3322
SNMF   | 0.1156 | 0.0656 | 0.1329  | 0.3491
ONMF   | 0.1085 | 0.0833 | 0.1337  | 0.3551
MNMF   | 0.0021 | 0.0013 | 0.0069  | 0.1065
GNMF   | 0.1362 | 0.1006 | 0.1497  | 0.3521
DNMF   | 0.1325 | 0.0962 | 0.1566  | 0.3427
DANMF  | 0.1269 | 0.1124 | 0.1437  | 0.3516
CSDNMF | 0.1524 | 0.0956 | 0.1652  | 0.3626

The optimal outcomes are emphasized in the color blue.
Table 6. Performance evaluation of each model on the Lastfm dataset.

Model  | NMI    | ARI    | F-Score | ACC
NMF    | 0.5043 | 0.3781 | 0.4345  | 0.7176
SNMF   | 0.5144 | 0.4183 | 0.4977  | 0.7263
ONMF   | 0.5297 | 0.4016 | 0.4861  | 0.7056
MNMF   | 0.1296 | 0.0643 | 0.1016  | 0.3642
GNMF   | 0.5291 | 0.4291 | 0.4762  | 0.7651
DNMF   | 0.5811 | 0.4771 | 0.5783  | 0.7162
DANMF  | 0.5874 | 0.4962 | 0.6129  | 0.7439
CSDNMF | 0.6078 | 0.5816 | 0.6469  | 0.7555

The optimal outcomes are emphasized in the color blue.
Table 7. Performance evaluation of each model on the Pubmed dataset.

Model  | NMI    | ARI    | F-Score | ACC
NMF    | 0.1494 | 0.0939 | 0.5767  | 0.4314
SNMF   | 0.1647 | 0.1055 | 0.5437  | 0.4837
ONMF   | 0.1566 | 0.1647 | 0.5147  | 0.5171
MNMF   | 0.0006 | 0.0015 | 0.0167  | 0.2644
GNMF   | 0.1637 | 0.1129 | 0.6065  | 0.4933
DNMF   | 0.1752 | 0.1327 | 0.5261  | 0.5579
DANMF  | 0.2173 | 0.2331 | 0.6267  | 0.5916
CSDNMF | 0.2464 | 0.2473 | 0.6684  | 0.6077

The optimal outcomes are emphasized in the color blue.
Table 8. Performance evaluation of each model on the Facebook dataset.

Model  | NMI    | ARI    | F-Score | ACC
NMF    | 0.0601 | 0.0280 | 0.3363  | 0.3919
SNMF   | 0.0682 | 0.0411 | 0.3677  | 0.4006
ONMF   | 0.0657 | 0.0395 | 0.3535  | 0.4155
MNMF   | 0.0014 | 0.0009 | 0.1065  | 0.1273
GNMF   | 0.0711 | 0.0306 | 0.3587  | 0.4236
DNMF   | 0.0944 | 0.0674 | 0.4421  | 0.4697
DANMF  | 0.1065 | 0.0811 | 0.4758  | 0.4966
CSDNMF | 0.1263 | 0.0984 | 0.5271  | 0.5563

The optimal outcomes are emphasized in the color blue.
Table 9. Wilcoxon Signed Rank Test.

Proposed Model | Comparison Model | p-Value
CSDNMF | NMF   | 6.25 × 10^-6
CSDNMF | SNMF  | 2.02 × 10^-7
CSDNMF | ONMF  | 1.31 × 10^-6
CSDNMF | GNMF  | 4.12 × 10^-7
CSDNMF | MNMF  | 7.35 × 10^-7
CSDNMF | DNMF  | 2.80 × 10^-4
CSDNMF | DANMF | 1.63 × 10^-4