Article

A Robust Tensor-Based Submodule Clustering for Imaging Data Using $l_{1/2}$ Regularization and Simultaneous Noise Recovery via Sparse and Low Rank Decomposition Approach

1 Department of Electronics and Communication Engineering, National Institute of Technology Calicut, Calicut 673601, India
2 Department of Electronics and Instrumentation, Government Engineering College Kozhikode, Calicut 673005, India
3 Department of Computer Science, Norwegian University of Science and Technology, 2815 Gjøvik, Norway
* Author to whom correspondence should be addressed.
J. Imaging 2021, 7(12), 279; https://doi.org/10.3390/jimaging7120279
Submission received: 15 November 2021 / Revised: 9 December 2021 / Accepted: 11 December 2021 / Published: 17 December 2021

Abstract

The massive generation of data, which includes images and videos, has made data management, analysis, and information extraction difficult in recent years. To gather relevant information, this large amount of data needs to be grouped. Real-life data may be noise corrupted during data collection or transmission, and the majority of them are unlabeled, motivating the use of robust unsupervised clustering techniques. Traditional clustering techniques, which vectorize the images, are unable to preserve the geometrical structure of the images. Hence, a robust tensor-based submodule clustering method based on $l_{1/2}$ regularization with improved clustering capability is formulated. The $l_{1/2}$ induced tensor nuclear norm (TNN), integrated into the proposed method, offers better low rankness while retaining the self-expressiveness property of submodules. Unlike existing methods, the proposed method employs a simultaneous noise removal technique by twisting the lateral image slices of the input data tensor into frontal slices and eliminating the noise content in each image using the principles of the sparse and low rank decomposition technique. Experiments are carried out over three datasets with varying amounts of sparse, Gaussian and salt and pepper noise. The experimental results demonstrate the superior performance of the proposed method over the existing state-of-the-art methods.

1. Introduction

Classification of data into sensible groups is essential in a wide variety of fields, such as engineering, medical science, business, marketing and many more [1,2]. The most popular approaches for classifying objects into groups are discriminant analysis and clustering techniques [2,3]. Discriminant analysis is a supervised learning method in which the class labels are already defined and the aim is to assign unlabeled data to one of the predefined classes [2,4]. In clustering, the problem is to group the unlabeled data into sensible groups. Hence, clustering is useful in applications where there is little prior information about the available data [3]. Due to massive data generation in recent years, clustering has been found useful in various fields such as machine learning, pattern analysis, decision making, etc. [5]. The popular clustering algorithms proposed in recent years include hierarchical clustering, partitioning clustering, mixture resolving clustering, fuzzy clustering, and so on [2,6]. The methods described above take into account all of the dimensions of the input data during learning. Dealing with high-dimensional datasets, on the other hand, can be more difficult due to the curse of dimensionality [7,8]. As the dimensionality of the data grows, the data can become sparser, increasing the computational complexity of clustering [5].
Even if the data are multidimensional, they can be expressed effectively in a union of low-dimensional space [9]. In real-world scenarios, the high-dimensional data would also be distributed across several low-dimensional subspaces [10,11]. Then, the aim of subspace clustering is to identify these subspaces and segment the data based on their dissimilarity [7,12]. Algebraic methods, matrix factorization methods, statistical methods, and spectral clustering methods are the major types of subspace clustering techniques [6,13,14]. Spectral clustering is simple to implement and can outperform traditional algorithms. Hence, it is the most popular method for high-dimensional data clustering [15]. Depending on the type of affinity matrices derived from the data, various spectral clustering algorithms have been proposed.
Shi et al. proposed a normalized spectral clustering method which measures the dissimilarity between different groups and the similarity within a group using a normalized Laplacian matrix [16]. Ng et al. proposed another method with additional row normalization [17]. Then, the sparse subspace clustering (SSC) algorithm proposed by Elhamifar et al. utilizes the self-expressiveness property of the data [10]. The underlying theory behind the self-expressiveness property is that every data point lying in a particular subspace can be expressed as a linear combination of other data points that belong to the same subspace [1,10]. The SSC algorithm aims to find a sparse representation that corresponds to a minimal set of points belonging to the same subspace. Then, the solution of the optimization problem is used for spectral clustering [10]. Liu et al. proposed subspace segmentation by low rank representation (LRR) [18]. Similar to SSC, LRR also represents a given data point as the linear combination of other data points [14], but instead of the sparsest representation, LRR tries to find the low rank representation.
When dealing with higher dimensional signals, such as images, all the aforementioned methods map the 2D images into one-dimensional vectors. This approach is not so effective in capturing the spatial structure information of the images. To address this problem, instead of vectorizing the imaging data, a new approach called the union of free submodule (UoFS) model was proposed, which preserves the spatial structure of the 2D data [18,19]. In this model, images are stacked together in a third order tensor space. Kernfeld et al. proposed sparse submodule clustering (SSmC), which combines the UoFS model with the self-expressiveness property exploited in the SSC algorithm. In this, each image is interpreted as a linear combination of remaining images in the dataset [20]. However, in SSmC, the correlation between images from the same submodule is not taken into account [21]. To consider the inner correlation, the low rank structure of the multi-linear data is exploited in the sparse and low-rank submodule clustering method (SLRSmC) proposed by Piao et al. [21]. Identical to the scalar product, the tensor product is utilized for constructing the submodule clustering method. SLRSmC, on the other hand, imposes a low rank constraint on each image in the tensor, rather than a tensor low rank constraint. Wu et al. resolved this problem by imposing a low tensor rank constraint using the tensor nuclear norm (TNN) [19].
For enforcing the low rank constraint, the methods proposed in [19,20,21] use the $l_1$ norm instead of the $l_0$ norm. This ensures that the optimization problem is convex, since the $l_1$ norm is considered the convex surrogate of the $l_0$ norm [22]. Relying on the UoFS model, many extensions of the work proposed by Wu et al. were developed with the objective of addressing real-world scenarios, such as noise, incomplete observations, and so on. Francis et al. proposed a tensor-based single stage optimization framework for clustering imaging data under incomplete observations [6]. In this work, individual images with missing samples are fetched in sequence from the input data tensor for reconstruction. Further, reconstruction of the missing samples is carried out by matrix completion [6]. In another work, Johnson et al. replaced the low tensor multirank equivalent TNN by employing weighted tensor nuclear norm minimization (WTNN) for a more accurate low rank representation [23]. Baburaj et al. proposed a noise robust tensor-based submodule identification approach, named re-weighted low rank tensor approximation and $l_{1/2}$ regularization (RLRTA$l_{1/2}$R), to perform clustering in the presence of gross errors [24], using the re-weighted tensor nuclear norm. An error term was introduced into the model to separate noise and data, which brings noise robustness to the clustering technique. Xia et al. proposed a subspace clustering method for multi-view data in which the representation tensor is learned by means of weighted tensor Schatten p-norm minimization (WTSNM) [25]. In another work, Wu proposed a clustering-aware Laplacian regularized low-rank submodule clustering (CLLRSmC) model that exploits the local manifold structure of the data [26]. In this work, the nonlinear extension of the UoFS model, which can adapt to data drawn from a mixture of nonlinear manifolds, was presented.
Concurrently, the principle of sparse and low rank decomposition of matrices and tensors was applied to many research problems for noise removal. Shijila et al. proposed a unified framework of simultaneous denoising and moving object detection using low rank approximation [27]. Jin et al. proposed an impulse noise removal algorithm named robust ALOHA, which employs a sparse and low rank decomposition of a Hankel structured matrix [28]. They modeled impulse noise as a sparse component, then restored the underlying image while preserving the original image features [28]. Similarly, Cao et al. proposed a subspace-based non-local low rank and sparse factorization (SNLRSF) method for hyperspectral image denoising [29].
Since real-world data are heavily influenced by noise, which reduces clustering efficiency, and current techniques are unable to completely recover the data from noise, we propose a robust tensor based submodule identification technique with improved clustering capability, taking the following factors into account.
  • A robust tensor-based submodule clustering algorithm is proposed in this paper, which combines the clustering of 2D images with simultaneous noise removal in a single framework. Real-world data, such as images and videos, are frequently subjected to noise during acquisition, transmission, or due to limitations imposed by material and technological resources. The presence of noise affects the performance of clustering algorithms. To limit the effects of noise, existing methods usually include a global error term in their optimization problem. However, following this approach will not fully remove the noise encountered in individual images.
  • Hence, this work proposes a simultaneous noise removal scheme based on twisting the third-order input data tensor, which allows lateral image slices to become frontal slices of the twisted data tensor. Furthermore, images are extracted from this tensor data one by one, and each image is subjected to a sparse and low rank decomposition approach. Unlike the existing clustering methods, this procedure can find and eliminate the noise content in each of the images from the data, and a clean noise-free data tensor can be obtained for further clustering.
  • To better capture the low rankness and self-expressiveness property, the $l_{1/2}$ induced TNN is integrated into the proposed method. Furthermore, $l_{1/2}$ regularization is incorporated into the submodule identification term because of its ability to induce more sparsity. An optimization problem is formulated that enables the proposed method to perform improved clustering, even in the presence of noise, by employing the capabilities of the $l_{1/2}$ induced TNN and $l_{1/2}$ regularization, as well as simultaneous noise removal using sparse and low rank decomposition.

2. Technical Background

In this paper, tensors, matrices, vectors and scalars are denoted by calligraphic uppercase, bold uppercase, bold lowercase and non-bold letters, respectively. For a third-order tensor $\mathcal{X}$, $\mathcal{X}(:,l,m)$, $\mathcal{X}(l,:,m)$ and $\mathcal{X}(l,m,:)$ represent the $(l,m)$th mode-1, mode-2 and mode-3 (tube) fibers, respectively [19]. Similarly, $\mathcal{X}(i,:,:)$, $\mathcal{X}(:,i,:)$ and $\mathcal{X}(:,:,i)$ denote the $i$th horizontal, lateral and frontal slices, respectively. The $i$th frontal slice can also be represented by $\mathcal{X}^{(i)}$. Finally, $x_{l,m,n}$ represents the $(l,m,n)$th element of $\mathcal{X}$. The rest of this paper is organized as follows. Section 2 presents the technical background. The proposed optimization model and its solutions are described in Section 3. In Section 4, the performance of the proposed method is evaluated and the experimental results are presented. The conclusions are drawn in Section 5.
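For concreteness, the following minimal NumPy sketch shows how the fiber and slice notation above maps onto array indexing; the dimensions are arbitrary illustrative values, and NumPy's zero-based indexing is assumed.

```python
import numpy as np

# Illustrative dimensions only; any third-order tensor is indexed the same way.
n1, n2, n3 = 4, 5, 6
X = np.random.randn(n1, n2, n3)

tube_fiber       = X[0, 1, :]   # mode-3 (tube) fiber X(l, m, :)
mode1_fiber      = X[:, 1, 2]   # mode-1 fiber X(:, l, m)
horizontal_slice = X[0, :, :]   # i-th horizontal slice X(i, :, :)
lateral_slice    = X[:, 0, :]   # i-th lateral slice  X(:, i, :)
frontal_slice    = X[:, :, 0]   # i-th frontal slice  X(:, :, i), also written X^(i)
```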

2.1. Sparse and Low Rank Matrix Decomposition

It is evident that natural images have a low rank structure, and their rank increases when they are affected by noise [30]. Hence, noise removal can be performed by decomposing the data into a low rank and a sparse component; selecting only the low rank part yields noise-free data. In the sparse and low rank decomposition method, a given matrix $\mathbf{X}$ can be expressed as the sum of a low rank component $\mathbf{T}$ and a sparse component $\mathbf{S}$ [22,31]. The formulation is given by [27]
$$\min_{\mathbf{T},\mathbf{S}} \ \mathrm{rank}(\mathbf{T}) + \lambda\|\mathbf{S}\|_0 \quad \mathrm{s.t.} \quad \mathbf{X} = \mathbf{T} + \mathbf{S}$$
where $\mathbf{T}$ denotes the low rank matrix, $\mathbf{S}$ represents the sparse matrix and $\lambda$ represents the regularization term. However, solving Equation (1) is NP-hard due to its non-convex nature [27]. In principal component pursuit (PCP), Candes et al. recovered the low rank matrix using convex programming tools, with the following formulation,
$$\min_{\mathbf{T},\mathbf{S}} \ \|\mathbf{T}\|_* + \lambda\|\mathbf{S}\|_1 \quad \mathrm{s.t.} \quad \mathbf{X} = \mathbf{T} + \mathbf{S}$$
where $\|\cdot\|_*$ represents the nuclear norm and $\|\cdot\|_1$ represents the $l_1$ norm. The nuclear norm of a matrix $\mathbf{T}$ is given by the absolute sum of its singular values, and the minimization of $\|\mathbf{T}\|_*$ imposes a low rank nature [27]. The singular value decomposition (SVD) [32] approach with a specific threshold can be applied to obtain the low rank part, in which the singular values are arranged in descending order. Since the first few singular values hold maximum energy, the smaller singular values can be omitted, as those values usually represent noise or other sparse corruptions [27,33,34]. However, solving Equation (2) reduces the sparse content alone, and hence, mitigating factors such as Gaussian noise or group sparsity cannot be taken into account [22]. Zhou et al. extended the problem in Equation (2) by adding the constraint $\|\mathbf{X}-\mathbf{T}-\mathbf{S}\|_F^2 \leq \sigma$ into their model in order to handle Gaussian noise and other arbitrary corruptions, where $\sigma$ is the threshold [22]. The formulation is given by [22]
$$\min_{\mathbf{T},\mathbf{S}} \ \|\mathbf{T}\|_* + \lambda\|\mathbf{S}\|_1 + \frac{\mu}{2}\|\mathbf{X}-\mathbf{T}-\mathbf{S}\|_F^2$$
where μ represents the penalty parameter [6,22].
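To make the decomposition concrete, the following is a minimal sketch, assuming NumPy and illustrative parameter values, of an alternating scheme for Equation (3): the low rank part is updated by singular value thresholding and the sparse part by element-wise soft shrinkage. It is not the exact solver used in the referenced works, only an illustration of the principle.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: shrink the singular values of M by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(M, theta):
    """Element-wise soft shrinkage (the proximal operator of the l1 norm)."""
    return np.sign(M) * np.maximum(np.abs(M) - theta, 0.0)

def sparse_low_rank(X, lam=0.1, mu=1.0, n_iter=100):
    """Alternately update the low rank part T and the sparse part S of X."""
    T = np.zeros_like(X)
    S = np.zeros_like(X)
    for _ in range(n_iter):
        T = svt(X - S, 1.0 / mu)      # low rank update
        S = shrink(X - T, lam / mu)   # sparse update
    return T, S
```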

2.2. Self-Expressiveness Property of Submodules

Since images and videos in real life often have noise components associated with them, their management and grouping have become more challenging. The basic problem of image clustering is to group a collection of $N$ images $\{\mathbf{Y}_i \in \mathbb{R}^{n_1\times n_3}\}_{i=1}^{N}$ into $L$ categories [19]. However, the majority of existing clustering methods consider the images as vectors belonging to an $n_1 n_3$-dimensional space. Although this is a reasonable approach in many cases, it cannot provide satisfactory results in image clustering, where the geometrical structure of the data has to be taken into account. In the UoFS model, the images are considered to be lateral slices of a tensor $\mathcal{Y} \in \mathbb{R}^{n_1\times N\times n_3}$ [19]. The images can be assumed to belong to a union of free submodules. Then, the problem of finding the clusters is equivalent to finding the submodules to which each image belongs. This approach takes into account the spatial aspects of the images. Let $\mathbb{K}_{n_3}$ denote the set of all tubes belonging to $\mathbb{R}^{1\times 1\times n_3}$. This set of tubes forms a commutative ring under regular addition and the t-product [19,35]. The set of $n_1\times 1\times n_3$ lateral slices can be denoted by $\mathbb{K}_{n_3}^{n_1}$. Similar to vector spaces over a field, $\mathbb{K}_{n_3}^{n_1}$ forms a free module over the ring $\mathbb{K}_{n_3}$. As a result, a free submodule over the ring can be thought of as a generalized version of a subspace over a field [6,19].
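The t-product underlying this ring structure can be computed in the Fourier domain along the third mode; the sketch below, assuming NumPy, follows the standard slice-wise construction from the t-product literature rather than any specific implementation from the cited works.

```python
import numpy as np

def t_product(A, B):
    """t-product of A (n1 x n2 x n3) and B (n2 x n4 x n3), computed slice-wise
    in the Fourier domain along the third mode."""
    n3 = A.shape[2]
    A_hat = np.fft.fft(A, axis=2)
    B_hat = np.fft.fft(B, axis=2)
    C_hat = np.empty((A.shape[0], B.shape[1], n3), dtype=complex)
    for k in range(n3):
        C_hat[:, :, k] = A_hat[:, :, k] @ B_hat[:, :, k]
    return np.real(np.fft.ifft(C_hat, axis=2))
```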

3. Proposed Method

Under the UoFS model, making use of the self-expressiveness property, an image fetched from a submodule can be expressed as the t-linear combination of other images present in the same submodule, as shown in Figure 1. Hence, for a third-order tensor $\mathcal{Y}\in\mathbb{R}^{n_1\times N\times n_3}$, there exists a coefficient tensor $\mathcal{Z}\in\mathbb{R}^{N\times N\times n_3}$ such that $\mathcal{Y} = \mathcal{Y}*\mathcal{Z}$ [6]. Further, the tensor multirank can be used to capture the self-expressiveness property [19] using the tensor nuclear norm $\|\cdot\|_{\circledast}$, which is the tightest convex relaxation of the tensor multirank [36]. Since the structure of the solution $\mathcal{Z}$ also determines the performance of the clustering, a block diagonal structure is required for the coefficient tensor $\mathcal{Z}$, which can reveal the compactness between the intraclass components and the separation between the interclass components [8,37,38]. Hence, in the UoFS model, an f-block diagonal structure constraint is added [19]. In addition, images which belong to a single submodule are highly correlated, while those that belong to different submodules are only slightly correlated; to capture this, a dissimilarity matrix $\mathbf{M}\in[0,1]^{N\times N}$ is defined, where each entry indicates the dissimilarity between two images. The entries of $\mathbf{M}$, $m_{k,l}$, are given by [19]
$$m_{k,l} = 1 - \exp\left(-\frac{1 - |\langle\mathcal{Y}(:,k,:),\, \mathcal{Y}(:,l,:)\rangle|}{\gamma}\right)$$
Here, $\mathcal{Y}(:,k,:)$ and $\mathcal{Y}(:,l,:)$ represent the $k$th and $l$th lateral slices of the third-order tensor $\mathcal{Y}$, and $\gamma$ is the empirical average of all $1 - |\langle\mathcal{Y}(:,k,:),\, \mathcal{Y}(:,l,:)\rangle|$ [19]. Once an optimum coefficient tensor $\mathcal{Z}$ is obtained, the clustering of the data can be achieved using the spectral clustering technique, in which the $(k,l)$th entry of the affinity matrix $\mathbf{A}$ is calculated as [17]
$$a_{k,l} = \|\mathcal{Z}(k,l,:)\|_F + \|\mathcal{Z}(l,k,:)\|_F$$
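As an illustration, a hedged NumPy sketch of Equations (4) and (5) follows; it assumes the lateral slices are scaled so that the magnitude of their inner product lies in $[0,1]$, and it uses the standard (real) Frobenius inner product.

```python
import numpy as np

def dissimilarity_matrix(Y):
    """Equation (4): Y is n1 x N x n3 with images as lateral slices."""
    N = Y.shape[1]
    inner = np.zeros((N, N))
    for k in range(N):
        for l in range(N):
            inner[k, l] = abs(np.sum(Y[:, k, :] * Y[:, l, :]))
    D = 1.0 - inner            # 1 - |<Y(:,k,:), Y(:,l,:)>|
    gamma = D.mean()           # empirical average of all the entries
    return 1.0 - np.exp(-D / gamma)

def affinity_matrix(Z):
    """Equation (5): Z is N x N x n3; a[k, l] = ||Z(k,l,:)||_F + ||Z(l,k,:)||_F."""
    tube_norms = np.linalg.norm(Z, axis=2)
    return tube_norms + tube_norms.T
```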
Incorporating all the factors mentioned above, the clustering problem is formulated into an optimization problem given by [19]
$$\min_{\mathcal{Z}} \ \|\mathcal{Z}\|_{\circledast} + \lambda_1\sum_{k=1}^{n_3}\|\mathbf{M}\odot\mathbf{Z}^{(k)}\|_1 + \lambda_2\|\mathcal{Y}-\mathcal{Y}*\mathcal{Z}\|_F^2$$
where $\odot$, $\|\cdot\|_1$ and $\|\cdot\|_F$ represent the element-wise multiplication operator, the $l_1$ norm and the Frobenius norm, respectively. Further, $\|\mathcal{Z}\|_{\circledast}$ represents the tensor nuclear norm (TNN) [19]. In addition, $\lambda_1$ and $\lambda_2$ stand for the regularization parameters of the optimization problem.
Further, the $l_1$ norm is used in the submodule structure constraint term, $\lambda_1\sum_{k=1}^{n_3}\|\mathbf{M}\odot\mathbf{Z}^{(k)}\|_1$, as well as in the TNN of $\mathcal{Z}$ in the above expression. In the methods mentioned in [19,20,21], the $l_1$ norm is used instead of the $l_0$ norm for imposing the low rank constraint. The strong acceptance of $l_1$ minimization in sparsity-related problems is because of its convex nature and its ability to provide a sparse solution with less of a computational bottleneck [39]. However, $l_1$ regularization is a loose approximation of $l_0$ regularization, and the performance will be limited in many applications [40,41]. Hence, to improve the performance, $l_q$ $(0<q<1)$ regularization techniques can be employed. Therefore, in order to extract the sparsest structure of the vector $\mathbf{x}\in\mathbb{R}^N$ from the observation $\mathbf{y}=\mathbf{A}\mathbf{x}$, the $l_q$ regularization problem is represented by
$$\min_{\mathbf{x}\in\mathbb{R}^N} \ \|\mathbf{A}\mathbf{x}-\mathbf{y}\|_2^2 + \lambda\|\mathbf{x}\|_q^q$$
where $\mathbf{y}\in\mathbb{R}^m$ and $\mathbf{A}\in\mathbb{R}^{m\times N}$. Then, $\|\mathbf{x}\|_q$ represents the $l_q$ quasi-norm and is defined by $\|\mathbf{x}\|_q = \left(\sum_{i=1}^{N}|x_i|^q\right)^{\frac{1}{q}}$.
The unit ball representations of all the norms are illustrated in Figure 2, in which the $l_2$ norm ball has a spherical shape, whereas the $l_1$ norm ball is diamond shaped. It is obvious that $l_1$ regularization provides a sparser solution than the $l_2$ norm, since there is a higher probability for the $\mathbf{y}=\mathbf{A}\mathbf{x}$ line to meet the ball at the axes. However, as the value of $q$ is reduced further, the unit ball assumes the shape shown in Figure 2d.
Hence, the probability of achieving a sparser solution increases as the value of $q$ is decreased from 1 toward 0. For $q\in[\frac{1}{2},1)$, the solution becomes sparser for smaller values of $q$, while no significant change is observed in the performance for $q\in[0,\frac{1}{2})$ [39,42,43]. Hence, $l_{1/2}$ regularization can be chosen as the optimum regularization method. In works such as [19,20,21], TNN was used for imposing the low rank constraint in the optimization problem. For a tensor $\mathcal{X}\in\mathbb{R}^{n_1\times n_2\times n_3}$, the expression for TNN with the t-SVD $\mathcal{X}=\mathcal{U}*\Sigma*\mathcal{V}^T$ is given by [19],
$$\|\mathcal{X}\|_{\circledast} = \sum_{k=1}^{n_3}\sum_{i=1}^{\min(n_1,n_2)}|\hat{\Sigma}(i,i,k)|$$
where $\mathcal{U}\in\mathbb{R}^{n_1\times n_1\times n_3}$ and $\mathcal{V}\in\mathbb{R}^{n_2\times n_2\times n_3}$ are orthogonal tensors. In addition, $\Sigma\in\mathbb{R}^{n_1\times n_2\times n_3}$ is an f-diagonal tensor and $\hat{\Sigma}$ is its Fourier transform. As in Equation (8), TNN uses the $l_1$ norm to determine the absolute sum of the singular values in each frontal slice of the tensor $\hat{\Sigma}$. However, in comparison to the $l_1$ norm, $l_{1/2}$ regularization yields a sparser solution [39,43]. Hence, to obtain a more accurate low tensor rank representation, $l_{1/2}$ regularization is incorporated, and Equation (8) can be rewritten as
$$\|\mathcal{X}\|_{\circledast,\frac{1}{2}} = \sum_{k=1}^{n_3}\sum_{i=1}^{\min(n_1,n_2)}|\hat{\Sigma}(i,i,k)|^{\frac{1}{2}}$$
where the above expression can be called the $l_{1/2}$ induced TNN. The tensor $\hat{\Sigma}$ contains $n_3$ frontal slices, each slice being a diagonal matrix whose entries are the singular values $\sigma_1\geq\sigma_2\geq\ldots\geq\sigma_{n_{\min}}\geq 0$, where $n_{\min}=\min(n_1,n_2)$. Then, we apply the half thresholding function proposed by Xu et al. over the vector $\sigma=(\sigma_1,\sigma_2,\ldots,\sigma_{n_{\min}})$, which can be expressed as [39],
$$h_{\lambda,\frac{1}{2}}(\sigma_i) = \begin{cases} \dfrac{2}{3}\sigma_i\left(1+\cos\left(\dfrac{2\pi}{3}-\dfrac{2}{3}\Psi_\lambda(\sigma_i)\right)\right), & |\sigma_i| > \dfrac{\sqrt[3]{54}}{4}\lambda^{\frac{2}{3}} \\ 0, & \text{otherwise} \end{cases}$$
where $\Psi_\lambda(\sigma_i)=\arccos\left(\frac{\lambda}{8}\left(\frac{|\sigma_i|}{3}\right)^{-\frac{3}{2}}\right)$ and $i=1$ to $n_{\min}$. Then, using the nonlinear half thresholding operator $\mathcal{H}_{\lambda,\frac{1}{2}}(\cdot)$, the operation given in Equation (10) is performed for all elements of $\sigma$. The expression for the half thresholding operator $\mathcal{H}_{\lambda,\frac{1}{2}}(\cdot)$ is given by $\mathcal{H}_{\lambda,\frac{1}{2}}(\sigma)=\left(h_{\lambda,\frac{1}{2}}(\sigma_1),h_{\lambda,\frac{1}{2}}(\sigma_2),\ldots,h_{\lambda,\frac{1}{2}}(\sigma_{n_{\min}})\right)_{T_h}$, where $T_h$ denotes the threshold value [39,43]. After repeating the process for all frontal slices of $\hat{\Sigma}$, the solution for the $l_{1/2}$ induced TNN is obtained. The detailed procedure of the solution is summarized in Algorithm 1.
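A possible NumPy implementation of the scalar half thresholding function in Equation (10) is sketched below; the constants follow the expressions above, and vectorized evaluation over a vector of singular values is an implementation choice, not something prescribed by the text.

```python
import numpy as np

def half_threshold(sigma, lam):
    """Equation (10): apply h_{lambda,1/2} element-wise to singular values."""
    sigma = np.asarray(sigma, dtype=float)
    out = np.zeros_like(sigma)
    thresh = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)
    keep = np.abs(sigma) > thresh
    s = sigma[keep]
    psi = np.arccos((lam / 8.0) * (np.abs(s) / 3.0) ** (-1.5))
    out[keep] = (2.0 / 3.0) * s * (1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * psi / 3.0))
    return out
```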
In real-life contexts, imperfections in an image may arise in different circumstances, such as during acquisition, from any of the display systems, or due to the constraints of both material and technological resources [44]. In any of these cases, the presence of noise in the data may adversely affect the outcomes of the algorithms [45]. The accuracy of the clustering algorithms could be improved if the data were noise-free. To meet this objective, each image is extracted by twisting the data tensor $\mathcal{X}\in\mathbb{R}^{n_1\times N\times n_3}$ developed for the clustering model, where $\vec{\mathcal{X}}\in\mathbb{R}^{n_1\times n_3\times N}$ denotes the twisted tensor [46]. The $k$th image is thereby transformed into the $k$th frontal slice $\vec{\mathcal{X}}^{(k)}\in\mathbb{R}^{n_1\times n_3}$ of the twisted tensor $\vec{\mathcal{X}}$, where $k=1$ to $N$. This further allows each individual image to be taken in sequence by calling $\vec{\mathcal{X}}(:,:,i)\in\mathbb{R}^{n_1\times n_3}$, where $i=1$ to $N$. The removal of noise from an image can be achieved by the sparse and low rank matrix decomposition method, as already illustrated in Section 2.1. The concept of removing noise from a single image is given in Equation (3). Then, for $N$ images, Equation (3) can be modified such that
$$\min_{\mathcal{T},\mathcal{S}} \ \sum_{k=1}^{N}\|\mathcal{T}^{(k)}\|_* + \sum_{k=1}^{N}\|\mathcal{S}^{(k)}\|_1 + \frac{1}{2}\sum_{k=1}^{N}\|\vec{\mathcal{X}}^{(k)}-\mathcal{T}^{(k)}-\mathcal{S}^{(k)}\|_F^2$$
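A minimal sketch of this per-image decomposition is given below; it assumes NumPy, reuses the illustrative sparse_low_rank() routine from Section 2.1, and treats the twist as a simple axis permutation, which is an interpretation of the operation described above rather than the authors' code.

```python
import numpy as np

def denoise_images(X, lam=0.1, mu=1.0):
    """X: n1 x N x n3 data tensor with images as lateral slices.

    Twist X so that each image becomes a frontal slice, then decompose every
    slice into a low rank (clean) part T and a sparse (noise) part S."""
    X_twist = np.transpose(X, (0, 2, 1))          # twisted tensor, n1 x n3 x N
    T = np.zeros_like(X_twist)
    S = np.zeros_like(X_twist)
    for k in range(X_twist.shape[2]):
        T[:, :, k], S[:, :, k] = sparse_low_rank(X_twist[:, :, k], lam, mu)
    return T, S
```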
To incorporate all the aforementioned challenges, the proposed method integrates the following aspects into its optimization problem.
  • Compared to TNN, the $l_{1/2}$ induced TNN is able to capture low rankness better. In addition, due to its inherent noise robustness and better ability to capture the self-expressiveness property, the $l_{1/2}$ induced TNN is introduced into the proposed method.
  • Compared to the $l_1$ norm, $l_{1/2}$ norm regularization is able to capture the f-block diagonal structure in a better way; hence, the submodule structure constraint is modified using the $l_{1/2}$ norm.
  • To meet the objective of noise removal, we use the tensor $\vec{\mathcal{X}}\in\mathbb{R}^{n_1\times n_3\times N}$, where $\vec{\mathcal{X}}$ is the twisted version of the noisy data tensor $\mathcal{X}\in\mathbb{R}^{n_1\times N\times n_3}$. Afterwards, noise removal is carried out by employing nuclear norm and $l_1$ norm minimization on each image to separate the noise content, combining the principles of the sparse and low rank decomposition technique. As a result, the underlying images are restored, and the sparse noise content is eliminated. This process delivers a noise-free data tensor for the subsequent clustering process.
Incorporating the aforementioned factors, the tensor $\mathcal{T}\in\mathbb{R}^{n_1\times n_3\times N}$ is introduced into the proposed optimization problem such that $\mathcal{T}$ is the clean data tensor, where the noise-removed images are stacked into its frontal slices $\mathcal{T}^{(k)}\in\mathbb{R}^{n_1\times n_3}$. Another tensor, $\mathcal{S}\in\mathbb{R}^{n_1\times n_3\times N}$, is defined, where the eliminated sparse noise content from each image is stored in its frontal slices $\mathcal{S}^{(k)}\in\mathbb{R}^{n_1\times n_3}$. In addition, the tensor $\mathcal{R}\in\mathbb{R}^{n_1\times N\times n_3}$ is incorporated, where $\mathcal{R}$ is the twisted version of the clean data tensor $\mathcal{T}$ such that $\mathcal{R}$ is given for the clustering. Further, we employ variable splitting for $\mathcal{Z}$ in Equation (6) such that $\mathcal{Z}=\mathcal{C}$ and $\mathcal{Z}=\mathcal{Q}$ [21]. Combining all the above, the proposed optimization problem can be reformulated as
$$\begin{aligned} \min_{\mathcal{C},\mathcal{Q},\mathcal{Z}} \ & \|\mathcal{C}\|_{\circledast,\frac{1}{2}} + \lambda_1\sum_{k=1}^{n_3}\|\mathbf{M}\odot\mathbf{Q}^{(k)}\|_{\frac{1}{2}}^{\frac{1}{2}} + \lambda_2\|\mathcal{R}-\mathcal{R}*\mathcal{Z}\|_F^2 + \lambda_3\sum_{k=1}^{N}\|\mathcal{T}^{(k)}\|_* + \lambda_4\sum_{k=1}^{N}\|\mathcal{S}^{(k)}\|_1 \\ \mathrm{s.t.} \ & \mathcal{Z}=\mathcal{C}, \quad \mathcal{Z}=\mathcal{Q}, \quad \mathcal{R}=\mathcal{T}, \quad \vec{\mathcal{X}}^{(k)}=\mathcal{T}^{(k)}+\mathcal{S}^{(k)}, \ k=1,\ldots,N \end{aligned}$$
where $\|\cdot\|_{\circledast,\frac{1}{2}}$ represents the $l_{1/2}$ induced TNN and $\|\cdot\|_{\frac{1}{2}}^{\frac{1}{2}}$ represents the $l_{1/2}$ norm. Further, $\|\cdot\|_*$, $\|\cdot\|_1$ and $\|\cdot\|_F$ denote the nuclear norm, $l_1$ norm and Frobenius norm, respectively. Finally, $\vec{\mathcal{X}}$ is the twisted version of the noisy data tensor $\mathcal{X}\in\mathbb{R}^{n_1\times N\times n_3}$. In the above expression, $\lambda_1$, $\lambda_2$, $\lambda_3$ and $\lambda_4$ denote the regularization parameters of the proposed optimization problem; among them, $\lambda_3$ and $\lambda_4$ balance the effect of the low rank and sparsity constraints [27]. The above constrained problem is transformed into an unconstrained one using the augmented Lagrangian (AL) method [19,47], given by
$$\begin{aligned} \mathcal{L}(\mathcal{C},\mathcal{Q},\mathcal{Z},\mathcal{G}_1,\mathcal{G}_2,\mathcal{G}_3,\mathcal{G}_4) = \ & \|\mathcal{C}\|_{\circledast,\frac{1}{2}} + \lambda_1\sum_{k=1}^{n_3}\|\mathbf{M}\odot\mathbf{Q}^{(k)}\|_{\frac{1}{2}}^{\frac{1}{2}} + \lambda_2\|\mathcal{R}-\mathcal{R}*\mathcal{Z}\|_F^2 + \lambda_3\sum_{k=1}^{N}\|\mathcal{T}^{(k)}\|_* + \lambda_4\sum_{k=1}^{N}\|\mathcal{S}^{(k)}\|_1 \\ & + \langle\mathcal{G}_1,\mathcal{Z}-\mathcal{C}\rangle + \langle\mathcal{G}_2,\mathcal{Z}-\mathcal{Q}\rangle + \langle\mathcal{G}_3,\mathcal{R}-\mathcal{T}\rangle + \langle\mathcal{G}_4,\vec{\mathcal{X}}^{(k)}-\mathcal{T}^{(k)}-\mathcal{S}^{(k)}\rangle \\ & + \frac{\mu}{2}\left(\|\mathcal{Z}-\mathcal{C}\|_F^2 + \|\mathcal{Z}-\mathcal{Q}\|_F^2 + \|\mathcal{R}-\mathcal{T}\|_F^2 + \sum_{k=1}^{N}\|\vec{\mathcal{X}}^{(k)}-\mathcal{T}^{(k)}-\mathcal{S}^{(k)}\|_F^2\right) \end{aligned}$$
where the tensors $\mathcal{G}_1$, $\mathcal{G}_2$, $\mathcal{G}_3$ and $\mathcal{G}_4$ are the Lagrangian multipliers, $\mu>0$ is the penalty parameter, and $\langle\cdot,\cdot\rangle$ denotes the inner product [27]. The above problem can be solved by iteratively minimizing the Lagrangian $\mathcal{L}$ over one tensor while keeping the others constant [6].
C Subproblem: The update expression for C is given by
$$\mathcal{C}^{[j+1]} = \arg\min_{\mathcal{C}} \ \|\mathcal{C}\|_{\circledast,\frac{1}{2}} + \langle\mathcal{G}_1,\mathcal{Z}-\mathcal{C}\rangle + \frac{\mu}{2}\|\mathcal{Z}-\mathcal{C}\|_F^2$$
The above expression can be transformed into the following form,
$$\mathcal{C}^{[j+1]} = \arg\min_{\mathcal{C}} \ \|\mathcal{C}\|_{\circledast,\frac{1}{2}} + \frac{\mu^{[j]}}{2}\left\|\mathcal{C}-\left(\mathcal{Z}^{[j]}+\frac{\mathcal{G}_1^{[j]}}{\mu^{[j]}}\right)\right\|_F^2$$
The solution to the above subproblem is obtained by
$$\mathcal{C}^{[j+1]} = \mathcal{H}_{\tau}\left(\mathcal{Z}^{[j]}+\frac{\mathcal{G}_1^{[j]}}{\mu^{[j]}}\right)$$
where $\tau=\frac{1}{\mu}$ is the threshold value. The operation of $\mathcal{H}_{\tau}(\cdot)$ is detailed in Algorithm 1.
Algorithm 1 Tensor singular value half thresholding.
Require: $\mathcal{Z}\in\mathbb{R}^{N\times N\times n_3}$, $\lambda>0$, $\mu>0$, threshold $T_h>0$
Ensure: singular value half-thresholded $\mathcal{Z}_{ht}\in\mathbb{R}^{N\times N\times n_3}$ as the optimal solution
1: $\hat{\mathcal{Z}} = \mathrm{fft}(\mathcal{Z},3)$
2: for $i=1$ to $n_3$ do
3:  $[\mathbf{U},\Sigma,\mathbf{V}] = \mathrm{svd}(\hat{\mathcal{Z}}^{(i)})$
4:  $\hat{\mathcal{U}}^{(i)}=\mathbf{U}$, $\hat{\Sigma}^{(i)}=\Sigma$, $\hat{\mathcal{V}}^{(i)}=\mathbf{V}$
5:  $\sigma=\mathrm{diag}(\hat{\Sigma}^{(i)})$
6:  $\mathcal{H}_{\lambda,\frac{1}{2}}(\sigma)=\left(h_{\lambda,\frac{1}{2}}(\sigma_1),h_{\lambda,\frac{1}{2}}(\sigma_2),\ldots,h_{\lambda,\frac{1}{2}}(\sigma_N)\right)_{T_h}$
7:  $\hat{\Sigma}_{hf}(:,:,i)=\mathrm{diag}(\mathcal{H}_{\lambda,\frac{1}{2}}(\sigma))$
8: end for
9: $\mathcal{U}=\mathrm{ifft}(\hat{\mathcal{U}},3)$, $\mathcal{H}_{\frac{1}{2}}(\Sigma_t)=\mathrm{ifft}(\hat{\Sigma}_{hf},3)$, $\mathcal{V}=\mathrm{ifft}(\hat{\mathcal{V}},3)$
10: $\mathcal{Z}_{ht}=\mathcal{U}*\mathcal{H}_{\frac{1}{2}}(\Sigma_t)*\mathcal{V}^T$
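The steps of Algorithm 1 can be realized, for instance, as in the following NumPy sketch, which reuses the half_threshold() routine given earlier; combining the slice-wise factors directly in the Fourier domain before the inverse transform is an equivalent shortcut to steps 9–10 and is an assumption of this sketch.

```python
import numpy as np

def tensor_half_threshold(Z, lam):
    """Algorithm 1: slice-wise SVD in the Fourier domain along the third mode,
    half threshold the singular values, and transform back."""
    n3 = Z.shape[2]
    Z_hat = np.fft.fft(Z, axis=2)
    out_hat = np.empty_like(Z_hat)
    for i in range(n3):
        U, s, Vh = np.linalg.svd(Z_hat[:, :, i], full_matrices=False)
        s_ht = half_threshold(s, lam)          # see Equation (10)
        out_hat[:, :, i] = U @ np.diag(s_ht) @ Vh
    return np.real(np.fft.ifft(out_hat, axis=2))
```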
Q Subproblem: The update expression for Q is given by
$$\mathcal{Q}^{[j+1]} = \arg\min_{\mathcal{Q}} \ \lambda_1\sum_{k=1}^{n_3}\|\mathbf{M}\odot\mathbf{Q}^{(k)}\|_{\frac{1}{2}}^{\frac{1}{2}} + \langle\mathcal{G}_2,\mathcal{Z}-\mathcal{Q}\rangle + \frac{\mu}{2}\|\mathcal{Z}-\mathcal{Q}\|_F^2$$
The above equation can be decomposed into $n_3$ expressions, and the $k$th frontal slice of $\mathcal{Q}$ can be updated by
$$\mathbf{Q}^{(k)[j+1]} = \arg\min_{\mathbf{Q}^{(k)}} \ \lambda_1\|\mathbf{M}\odot\mathbf{Q}^{(k)}\|_{\frac{1}{2}}^{\frac{1}{2}} + \frac{\mu^{[j]}}{2}\left\|\mathbf{Q}^{(k)}-\left(\mathbf{Z}^{(k)[j]}+\frac{\mathbf{G}_2^{(k)[j]}}{\mu^{[j]}}\right)\right\|_F^2$$
where $\mathbf{Q}^{(k)[j+1]}$ is the $k$th frontal slice/matrix of $\mathcal{Q}$. The solution to the above subproblem is given by the half thresholding operator [42],
$$\mathbf{Q}^{(k)[j+1]} = \mathcal{H}_{\frac{\lambda_1\mathbf{M}}{\mu^{[j]}}}\left(\mathbf{Z}^{(k)[j]}+\frac{\mathbf{G}_2^{(k)[j]}}{\mu^{[j]}}\right)$$
where $\mathcal{H}_{\frac{\lambda_1\mathbf{M}}{\mu^{[j]}}}$ is the half thresholding operator [39]. Here, $Q^{(k)}_{m,n}$ is the $(m,n)$th element of the $k$th frontal slice/matrix of $\mathcal{Q}$.
Z Subproblem: The subproblem for updating Z is given by
$$\mathcal{Z}^{[j+1]} = \arg\min_{\mathcal{Z}} \ \lambda_2\|\mathcal{R}-\mathcal{R}*\mathcal{Z}\|_F^2 + \langle\mathcal{G}_1^{[j]},\mathcal{Z}-\mathcal{C}^{[j+1]}\rangle + \frac{\mu^{[j]}}{2}\|\mathcal{Z}-\mathcal{C}^{[j+1]}\|_F^2 + \langle\mathcal{G}_2^{[j]},\mathcal{Z}-\mathcal{Q}^{[j+1]}\rangle + \frac{\mu^{[j]}}{2}\|\mathcal{Z}-\mathcal{Q}^{[j+1]}\|_F^2$$
The above equation can be simplified as
$$\mathcal{Z}^{[j+1]} = \arg\min_{\mathcal{Z}} \ \lambda_2\|\mathcal{R}-\mathcal{R}*\mathcal{Z}\|_F^2 + \frac{\mu^{[j]}}{2}\left(\|\mathcal{Z}-\mathcal{C}^{[j+1]}\|_F^2 + \|\mathcal{Z}-\mathcal{Q}^{[j+1]}\|_F^2\right)$$
Taking the Fourier transform on both sides, the above equation can be rewritten as
$$\hat{\mathcal{Z}}^{[j+1]} = \arg\min_{\hat{\mathcal{Z}}} \ \lambda_2\|\hat{\mathcal{R}}-\hat{\mathcal{R}}\hat{\mathcal{Z}}\|_F^2 + \frac{\mu^{[j]}}{2}\left(\|\hat{\mathcal{Z}}-\hat{\mathcal{P}}_1^{[j+1]}\|_F^2 + \|\hat{\mathcal{Z}}-\hat{\mathcal{P}}_2^{[j+1]}\|_F^2\right)$$
where $\hat{\mathcal{Z}}$, $\hat{\mathcal{P}}_1^{[j+1]}$ and $\hat{\mathcal{P}}_2^{[j+1]}$ are the Fourier transforms of the $k$th frontal slices of $\mathcal{Z}$, $\mathcal{C}^{[j+1]}-\frac{\mathcal{G}_1^{[j]}}{\mu^{[j]}}$ and $\mathcal{Q}^{[j+1]}-\frac{\mathcal{G}_2^{[j]}}{\mu^{[j]}}$, respectively, and the products are carried out slice-wise [19]. The analytic solution for the update of the $k$th frontal slice is given by
$$\hat{\mathbf{Z}}^{(k)[j+1]} = \left(2\lambda_2\hat{\mathbf{R}}^{(k)T}\hat{\mathbf{R}}^{(k)} + \mu^{[j]}\left(\hat{\mathbf{P}}_1^{(k)[j+1]}+\hat{\mathbf{P}}_2^{(k)[j+1]}\right)\right)\left(2\lambda_2\hat{\mathbf{R}}^{(k)T}\hat{\mathbf{R}}^{(k)}+2\mu^{[j]}\mathbf{I}\right)^{-1}$$
T Subproblem: In T subproblem, the update expression is given by,
$$\mathcal{T}^{[j+1]} = \arg\min_{\mathcal{T}} \ \lambda_3\sum_{k=1}^{N}\|\mathcal{T}^{(k)}\|_* + \langle\mathcal{G}_4,\vec{\mathcal{X}}^{(k)}-\mathcal{T}^{(k)}-\mathcal{S}^{(k)}\rangle + \frac{\mu}{2}\|\vec{\mathcal{X}}^{(k)}-\mathcal{T}^{(k)}-\mathcal{S}^{(k)}\|_F^2$$
The above expression can be considered as $N$ subproblems. Then, the update expression for the $k$th slice is given by
$$\mathcal{T}^{(k)[j+1]} = \arg\min_{\mathcal{T}^{(k)}} \ \lambda_3\|\mathcal{T}^{(k)}\|_* + \frac{\mu^{[j]}}{2}\left\|\mathcal{T}^{(k)}-\left(\vec{\mathcal{X}}^{(k)[j]}-\mathcal{S}^{(k)[j]}+\frac{\mathcal{G}_4^{(k)[j]}}{\mu^{[j]}}\right)\right\|_F^2$$
The above expression can be solved using singular value thresholding,
$$\mathcal{T}^{(k)[j+1]} = \mathcal{S}_{\frac{\lambda_3}{\mu^{[j]}}}\left(\vec{\mathcal{X}}^{(k)[j]}-\mathcal{S}^{(k)[j]}+\frac{\mathcal{G}_4^{(k)[j]}}{\mu^{[j]}}\right)$$
where $\mathcal{S}_{\frac{\lambda_3}{\mu^{[j]}}}[\cdot]$ is the singular value thresholding operator [48].
S Subproblem: Similarly, the update expression for S subproblem is given by
$$\mathcal{S}^{[j+1]} = \arg\min_{\mathcal{S}} \ \lambda_4\sum_{k=1}^{N}\|\mathcal{S}^{(k)}\|_1 + \langle\mathcal{G}_4,\vec{\mathcal{X}}^{(k)}-\mathcal{T}^{(k)}-\mathcal{S}^{(k)}\rangle + \frac{\mu}{2}\|\vec{\mathcal{X}}^{(k)}-\mathcal{T}^{(k)}-\mathcal{S}^{(k)}\|_F^2$$
The solution for the $k$th slice is given by
$$\mathcal{S}^{(k)[j+1]} = s_{\frac{\lambda_4}{\mu^{[j]}}}\left(\vec{\mathcal{X}}^{(k)[j]}-\mathcal{T}^{(k)[j+1]}+\frac{\mathcal{G}_4^{(k)[j]}}{\mu^{[j]}}\right)$$
where $s_{\frac{\lambda_4}{\mu^{[j]}}}[\cdot]$ is the shrinkage operator defined in [27], whose expression is given by $s_{\theta>0}(x)=\mathrm{sign}(x)\max(|x|-\theta,0)$, where $\theta$ represents the threshold value.
R Subproblem:
$$\mathcal{R}^{[j+1]} = \arg\min_{\mathcal{R}} \ \lambda_2\|\mathcal{R}-\mathcal{R}*\mathcal{Z}\|_F^2 + \langle\mathcal{G}_3,\mathcal{R}-\mathcal{T}\rangle + \frac{\mu}{2}\|\mathcal{R}-\mathcal{T}\|_F^2$$
The solution for the above expression is given by
$$\mathcal{R}^{[j+1]} = \left(2\lambda_2\mathcal{Z}^{[j+1]}-2\lambda_2\mathcal{Z}^{[j+1]}\times\mathcal{Z}^{T[j+1]}+\mu^{[j]}\mathcal{I}\right)^{-1}\left(-\mathcal{G}_3^{[j]}+\mu^{[j]}\mathcal{T}^{[j+1]}\right)$$
Finally, the stopping criterion is measured by the following condition,
$$\max\left(\|\mathcal{Z}^{[j+1]}-\mathcal{C}^{[j]}\|_\infty,\ \|\mathcal{Z}^{[j+1]}-\mathcal{Q}^{[j]}\|_\infty,\ \|\mathcal{Z}^{[j+1]}-\mathcal{Z}^{[j]}\|_\infty,\ \|\mathcal{C}^{[j+1]}-\mathcal{C}^{[j]}\|_\infty,\ \|\mathcal{Q}^{[j+1]}-\mathcal{Q}^{[j]}\|_\infty,\ \|\mathcal{T}^{[j+1]}-\mathcal{T}^{[j]}\|_\infty\right) < \epsilon$$
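As an illustration, the stopping test could be evaluated as in the following NumPy sketch; the use of the largest absolute entry (an infinity-norm-style measure) is an assumption of this sketch, since the norm in Equation (31) is not restated here.

```python
import numpy as np

def converged(Z_new, Z_old, C_new, C_old, Q_new, Q_old, T_new, T_old, eps=1e-6):
    """Stopping test of Equation (31): largest absolute entry over all residuals."""
    residuals = [Z_new - C_old, Z_new - Q_old, Z_new - Z_old,
                 C_new - C_old, Q_new - Q_old, T_new - T_old]
    return max(np.max(np.abs(r)) for r in residuals) < eps
```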
The overall algorithm can be summarized in Algorithm 2.
Algorithm 2 Robust tensor-based submodule clustering for noisy imaging data.
Require: data $\mathcal{X}\in\mathbb{R}^{n_1\times N\times n_3}$ and parameters $\lambda_1$, $\lambda_2$, $\lambda_3$, $\lambda_4$, $\mu_{max}$, $\rho$
Ensure: $\mathcal{Z}\in\mathbb{R}^{N\times N\times n_3}$, $\mathcal{T}\in\mathbb{R}^{n_1\times n_3\times N}$
1: $\mathcal{C}^{[0]}=\mathcal{Q}^{[0]}=\mathcal{Z}^{[0]}=\mathcal{G}_1^{[0]}=\mathcal{G}_2^{[0]}\leftarrow 0\in\mathbb{R}^{N\times N\times n_3}$
2: $\mathcal{T}^{[0]}=\mathcal{S}^{[0]}=\mathcal{G}_4^{[0]}\leftarrow 0\in\mathbb{R}^{n_1\times n_3\times N}$
3: $\mathcal{R}^{[0]}=\mathcal{G}_3^{[0]}\leftarrow 0\in\mathbb{R}^{n_1\times N\times n_3}$ and $j\leftarrow 0$
4: $\lambda_1>0$, $\lambda_2>0$, $\lambda_3>0$, $\lambda_4>0$, $\mu^{[0]}>0$, $\rho>0$
5: while not converged do
6:  $\mathcal{C}^{[j+1]}$ ← Update using Equation (16)
7:  $\mathcal{Q}^{[j+1]}$ ← Update using Equation (18)
8:  $\mathcal{Z}^{[j+1]}$ ← Update using Equation (23)
9:  $\mathcal{T}^{[j+1]}$ ← Update using Equation (24)
10:  $\mathcal{S}^{[j+1]}$ ← Update using Equation (28)
11:  $\mathcal{R}^{[j+1]}$ ← Update using Equation (30)
12:  $\mathcal{G}_1^{[j+1]}=\mathcal{G}_1^{[j]}+\mu^{[j]}(\mathcal{Z}-\mathcal{C})$
13:  $\mathcal{G}_2^{[j+1]}=\mathcal{G}_2^{[j]}+\mu^{[j]}(\mathcal{Z}-\mathcal{Q})$
14:  $\mathcal{G}_3^{[j+1]}=\mathcal{G}_3^{[j]}+\mu^{[j]}(\mathcal{R}-\mathcal{T})$
15:  $\mathcal{G}_4^{[j+1]}=\mathcal{G}_4^{[j]}+\mu^{[j]}(\vec{\mathcal{X}}^{(k)}-\mathcal{T}^{(k)}-\mathcal{S}^{(k)})$, $k=1,\ldots,N$
16:  $\mu^{[j+1]}=\rho\,\mu^{[j]}$
17:  Check the convergence using Equation (31)
18:  $j\leftarrow j+1$
19: end while
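Steps 12–16 of Algorithm 2 amount to standard multiplier and penalty updates; a hedged NumPy-style sketch is given below. Capping the penalty at mu_max is an assumption motivated by $\mu_{max}$ appearing in the algorithm's inputs, and T_untw is a naming choice made here for the clean tensor re-twisted to match the shape of R.

```python
def update_multipliers(G1, G2, G3, G4, Z, C, Q, R, T_untw, T, S, Xt, mu, rho, mu_max=1e8):
    """Steps 12-16 of Algorithm 2: gradient-ascent updates of the multipliers
    followed by the penalty increase. T_untw is the clean tensor re-twisted to
    n1 x N x n3 so that it matches R; T, S and Xt live in the twisted domain."""
    G1 = G1 + mu * (Z - C)
    G2 = G2 + mu * (Z - Q)
    G3 = G3 + mu * (R - T_untw)
    G4 = G4 + mu * (Xt - T - S)
    mu = min(rho * mu, mu_max)   # assumed cap at mu_max (listed as an input)
    return G1, G2, G3, G4, mu
```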

4. Results and Discussions

The performance of the proposed method is evaluated on the Coil20 http://www.cs.columbia.edu/CAVE/software/softlib/coil-20.php (accessed on 5 June 2021), MNIST http://yann.lecun.com/exdb/mnist/ (accessed on 5 June 2021) and UCSD http://www.svcl.ucsd.edu/projects/anomaly/dataset.htm (accessed on 8 June 2021) datasets [19]. These datasets are widely used for clustering, completion, noise reduction, and moving object detection problems [6,19,42]. The dimensions, number of classes and total number of images of these datasets have already been described in various papers and can be found in [6,24]. For comparison with the proposed method, other recent clustering methods are chosen: normalized spectral clustering [16], SSmC [20], SLRSmC [21], SCLRSmC [19], weighted tensor nuclear norm (WTNN) minimization [23] and the re-weighted low rank tensor approximation and $l_{1/2}$ regularization (RLRTA$l_{1/2}$R) approach [24]. All the experiments are implemented and run on a personal computer with an i5-4590 CPU at 3.30 GHz and 8 GB of RAM. The results are compared using standard evaluation metrics: the misclustering rate (MCR), adjusted Rand index (ARI), normalized mutual information (NMI) and purity [6]. The definitions and expressions of MCR, ARI and purity can be found in [6,24]. NMI is obtained by normalizing the mutual information to a value between 0 and 1, where a value of 1 indicates perfect labeling. In addition, purity and ARI are upper bound measures whose values lie in the interval $(0,1)$ [24]. For these metrics, higher values indicate sound performance. In this work, the clustering results for MCR are represented by $(m\pm\sigma)\%$, where $m$ is the mean and $\sigma$ is the standard deviation; a smaller MCR value indicates improved performance [6]. To simulate sparse noise in the data, we create an algorithm which generates noise values at random locations in the images. The amount of sparse noise applied to the data can be modified by this algorithm, and the amount of sparse noise added is expressed as a percentage of the total pixels in the images of each dataset. In this work, the amount of sparse noise is varied from 5% to 50% for all the datasets.
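Since the exact noise-generation routine is not listed, the following is a hedged NumPy sketch of the described procedure: a chosen fraction of randomly located pixels in each image is overwritten with random values. The function name and the assumption of pixel values in $[0,1]$ are illustrative.

```python
import numpy as np

def add_sparse_noise(images, fraction=0.20, seed=None):
    """Corrupt a given fraction of pixels in each image with random values.

    images: n1 x N x n3 tensor whose lateral slices are the N images."""
    rng = np.random.default_rng(seed)
    noisy = images.copy()
    n1, N, n3 = images.shape
    n_corrupt = int(round(fraction * n1 * n3))
    for i in range(N):
        flat_idx = rng.choice(n1 * n3, size=n_corrupt, replace=False)
        rows, cols = np.unravel_index(flat_idx, (n1, n3))
        noisy[rows, i, cols] = rng.random(n_corrupt)  # noise values in [0, 1)
    return noisy
```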
We first present the experimental results obtained from the Coil20 dataset, for which the values chosen for the regularization parameters are $\lambda_3=1.155$ and $\lambda_4=0.355$. The proposed method exhibits a major improvement in its clustering efficiency, and improved evaluation metrics are obtained. The reason is that the proposed algorithm decomposes each image in the dataset into its sparse and low rank parts. The sparse part represents the noise encountered, and the algorithm removes this sparse noise content. Consequently, the imaging data are freed from noise and clean data are available for clustering at the same time. Figure 3 shows the visual appearance of the simultaneous noise removal of a single image from the Coil20 dataset with 20% sparse noise applied. The eliminated noise content from the noisy image is presented in Figure 3c, and the recovered clean image is shown in Figure 3d.
The proposed method is compared against state-of-the-art clustering algorithms. The MCR values obtained using the Coil20 dataset for the proposed method and the other algorithms are summarized in the first section of Table 1, with the best values shown in bold. Similarly, the compared results of the purity, NMI and ARI metrics using the Coil20 dataset are shown in Figure 5a, Figure 5b and Figure 5c, respectively. Our method obtains better values of the MCR, purity, NMI and ARI metrics compared to the state-of-the-art methods. For 10% to 20% of sparse noise content, the MCR values of the proposed method are $(0.67\pm1.08)\%$ and $(2.56\pm4.42)\%$ (second and third row, last column of Table 1), and these values are extremely small compared to the other algorithms. Similarly, the purity, NMI and ARI values of the proposed method for 20% of sparse noise content are 0.965, 0.922 and 0.912, respectively (from Figure 5). Hence, it is evident from Table 1 and Figure 5 that the proposed method outperforms the other state-of-the-art clustering algorithms. The evaluation metrics of the proposed method show small decrements for noise levels of 35% and above, but the values are still better than those of its counterparts. Further, the noise-removed images achieved by the proposed method using the Coil20 dataset for various levels of sparse noise content are shown in Figure 4. The eliminated sparse noise content from the noisy images in the dataset is clearly illustrated in the third row of Figure 4.
Second, experiments are conducted on the UCSD dataset, and the obtained MCR values are provided in the second section of Table 1. For lower noise content (5% to 20%), the proposed approach achieves smaller MCR values, as reported in Table 1. For noise levels of 30% and 40%, the MCR values of the proposed method are $(10.62\pm9.44)\%$ and $(14.05\pm8.92)\%$. In the same scenario, algorithms such as SSmC, SLRSmC and SCLRSmC fail to achieve good clustering results (Table 1, Figure 5d–f). WTNN shows improved results over the SSmC, SLRSmC, SCLRSmC and spectral methods for lower noise values, but its performance drops when the noise content in the imaging data is increased. Among the compared methods, the RLRTA$l_{1/2}$R method shows the second best performance; its MCR metrics range from $(0.75\pm1.96)\%$ to $(15.33\pm10.90)\%$ for noise levels up to 30%. In all of the scenarios considered, the proposed approach significantly outperforms the state-of-the-art methods. In addition, $l_{1/2}$ norm regularization effectively captures the f-block diagonal structure, and the self-expressiveness property of the submodules is preserved in the proposed method. A few images from the UCSD dataset, recovered by the proposed method under various noise levels, are shown in Figure 4. The proposed method's efficiency is also checked using the MNIST dataset [19]. The MNIST dataset comprises images of handwritten digits from 0 to 9 with a resolution of $28\times28$, where the number of images belonging to each class is set to 30. The MCR metrics obtained for the proposed as well as the compared methods are summarized in the last section of Table 1. The proposed method produces better clustering results than the state-of-the-art methods.
To summarize, the proposed approach performs well and provides good clustering results, even with noise-corrupted data, for these three datasets. Furthermore, it outperforms all of the clustering algorithms compared in this work in every case. The reasons for the improved performance are as follows: first, the proposed algorithm is a unified optimization framework that clusters imaging data while also eliminating noise from images. Furthermore, the $l_{1/2}$ induced TNN incorporated into the proposed method provides better low rankness and maintains the self-expressiveness property of submodules. Second, the proposed method's optimization problem demonstrates the benefit of $l_{1/2}$ regularization in providing better submodule identification. Finally, the simultaneous noise removal reduces the impact of noise on the clustering performance and provides clean data for further clustering. None of the existing state-of-the-art methods include a simultaneous noise reduction scheme in their optimization problem for extracting the noise content from individual images. In most of the approaches, a global error term is used in the optimization problem to reduce the effect of noise when clustering. However, this is only a partial solution that cannot be applied to all cases. On the other hand, the proposed method handles individual images in the dataset and removes the noise content simultaneously, which is the major contribution of the proposed method.

4.1. Analysis of the Proposed Method with Gaussian Noise and Salt and Pepper Noise

In order to further analyze the robustness of the proposed system, experiments were conducted on imaging data distorted by different types of noise [49,50]. For this study, we used the Coil20 and UCSD datasets, and two cases were considered. In case I, images corrupted by salt and pepper noise were considered. Salt and pepper noise is a type of impulse noise in which the noise values take the two extremes of the pixel value range [28]. In one study, an impulse noise removal algorithm based on the sparse and low rank decomposition method was proposed in which impulse noise was modeled as a sparse component and the underlying image was restored while keeping the original features [28]. In this work, salt and pepper noise of various noise densities ($d$) was added, with the density levels considered ranging from $d=0.03$ to $0.3$. In the presence of salt and pepper noise, it was observed that the proposed method provides improved clustering performance as well as restoring clean images with the noise content removed. To substantiate this, a few of the recovered images from the UCSD dataset are displayed in Figure 6, in which the first row represents the original images, the second row denotes the noise-corrupted images at various noise densities and the last row represents the recovered images. Further, the MCR metrics under salt and pepper noise are presented in Case I of Table 2. Similarly, the purity and ARI metrics of our method and the compared methods for the UCSD dataset are shown in Figure 7a and Figure 7b, respectively. In all cases, the proposed method outperforms the compared state-of-the-art algorithms.
In case II, images corrupted by Gaussian noise were used. Gaussian noise in images is most common when the lighting is low or the temperature is high [27], and it can occur at any time during the capture or transmission process. In this analysis, the noise variances considered are $\sigma_n^2 = 0.005$, $0.01$, $0.02$, $0.03$, $0.05$, $0.07$ and $0.1$. Case II in Table 2 summarizes the MCR metrics of the proposed method and the compared methods, and Figure 7c,d shows the compared results of the purity and ARI metrics, respectively. The images obtained by the proposed method under Gaussian noise are shown in the last four columns of Figure 6. For noise variances up to $\sigma_n^2 = 0.03$, our method successfully recovers the noise-free images, but for noise variances of $\sigma_n^2 = 0.05$ or more, the recovered images suffer from an over-smoothing problem. However, by fine-tuning the regularization parameters, this issue can be mitigated to a certain extent. Nonetheless, the compared methods generate inadequate clustering performance under the same scenarios. The second-best performance is achieved by the RLRTA$l_{1/2}$R approach. WTNN performs reasonably well for lower noise variances, but its results deteriorate for higher noise values. In comparison to the above methods, the other methods do not perform as well. Hence, in the presence of Gaussian noise at different noise levels, the proposed method performs well and outperforms state-of-the-art clustering methods in general.

4.2. Parameter Tuning and Convergence Analysis

The sensitivity analysis of all the regularization parameters against the evaluation metrics NMI and ARI, and the convergence of the proposed algorithm, are discussed in this section. The parameters $\lambda_3$ and $\lambda_4$ are tuned manually to obtain the best results from the range of 0.2–2.1. Figure 8b,c illustrates graphs of the two metrics, NMI and ARI, against these regularization parameters. The graphs show that the proposed method provides good evaluation scores for $\lambda_3$ within the range of 0.70–1.5 and $\lambda_4$ within 0.25–0.95. The optimal values for the proposed method are found to be $\lambda_3 = 1.15$ and $\lambda_4 = 0.65$ within this range. However, to obtain good evaluation scores and visual quality when using different datasets, minor variations in these values can be allowed. Similarly, the parameters $\lambda_1$ and $\lambda_2$ balance the effect of the submodule structure constraint term and the representation error term, respectively. The optimal values are identified as $\lambda_1 = 4.5\times10^{3}$ and $\lambda_2 = 7.5\times10^{3}$. The sensitivity analysis of $\lambda_1$ and $\lambda_2$ with respect to the ARI metric is shown in Figure 8a.
Similarly, the proposed algorithm has a high convergence rate and converges within 10–20 iterations. The proposed method's convergence curves for the metrics NMI and MCR (with mean value $m$) are plotted in Figure 9a,b, respectively. The plots show that, as the number of iterations increases, the change in the NMI and MCR values converges to zero. In addition, Table 3 shows the execution time required by the proposed method compared to the existing methods. Since the proposed algorithm employs $l_{1/2}$ regularization and the $l_{1/2}$ induced TNN, their solutions must be computed iteratively. Furthermore, the proposed method employs nuclear norm and $l_1$ norm minimization for the simultaneous noise removal of every image in the dataset. Therefore, a reasonable amount of additional time is consumed by the proposed method. This marginal increase in computational time can, however, be offset by the use of high-performance computing stations. In addition, the proposed method has six subproblems and four multipliers to update, as presented in Algorithm 2. In the proposed method, the $l_{1/2}$ induced TNN, nuclear norm and $l_1$ norm minimization demand the most computation. The $\mathcal{T}\in\mathbb{R}^{n_1\times n_3\times N}$ update involves nuclear norm minimization on each slice, which requires $O(\frac{1}{2}Nn_1n_3^2)$ operations. The $\mathcal{S}$ update, where $\mathcal{S}\in\mathbb{R}^{n_1\times n_3\times N}$, requires $O(Nn_1n_3)$ operations. Finally, the $\mathcal{C}\in\mathbb{R}^{N\times N\times n_3}$ update requires $O(N^2n_3\log_2 n_3 + \frac{1}{2}n_3N^2)$ operations. Therefore, the total computational complexity of the proposed method is $O(T(N^2n_3\log_2 n_3 + \frac{1}{2}n_3N^2 + \frac{1}{2}Nn_1n_3^2 + Nn_1n_3))$ operations, where $T$ represents the number of iterations. Hence, the proposed method offers moderate computational complexity.

5. Conclusions

This paper proposes a robust tensor-based low rank submodule clustering technique for 2D imaging data with enhanced clustering capability. Traditional clustering methods treat images as vectors, but the proposed method treats them as lateral slices of a third order tensor, which aids in preserving the spatial information of the imaging data. The proposed optimization problem incorporates the $l_{1/2}$ induced TNN and $l_{1/2}$ regularization, which facilitates achieving a more accurate low tensor rank approximation and submodule segmentation. Unlike existing clustering techniques, the proposed method incorporates a simultaneous noise reduction scheme by applying the principles of sparse and low rank decomposition to each individual noise-corrupted image in the dataset. Afterwards, the noise content is removed, and the underlying clean images are provided for further clustering. The proposed method is compared to state-of-the-art clustering algorithms, and the experimental results show that it outperforms them in terms of the NMI, MCR and purity metrics.

Author Contributions

Conceptualization, J.F. and S.N.G.; methodology, J.F., B.M., S.N.G. and S.G.; Formal analysis, S.N.G. and S.G.; writing-original draft, J.F., S.G.; writing-review & editing, J.F., B.M.; validation and visualization, S.N.G., S.G.; supervision, S.N.G. and S.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets during the current study are publicly available and can be obtained from the below mentioned public domain resources at: http://www.cs.columbia.edu/CAVE/software/softlib/coil-20.php (accessed on 5 June 2021), http://yann.lecun.com/exdb/mnist/ (accessed on 5 June 2021) and http://www.svcl.ucsd.edu/projects/anomaly/dataset.htm (accessed on 8 June 2021).

Conflicts of Interest

The authors declare that this work is original, has not been fully or partly published before, and is not currently being considered for publication elsewhere. They also confirm that there are no known conflicts of interest associated with this publication.

References

  1. Chen, Z.; Ding, S.; Hou, H. A novel self-attention deep subspace clustering. Int. J. Mach. Learn. Cybern. 2021, 12, 2377–2387. [Google Scholar] [CrossRef]
  2. Dubes, R.; Jain, A.K. Clustering methodologies in exploratory data analysis. Adv. Comput. 1980, 19, 113–228. [Google Scholar]
  3. Saxena, A.; Prasad, M.; Gupta, A.; Bharill, N.; Patel, O.P.; Tiwari, A.; Er, M.J.; Ding, W.; Lin, C.T. A review of clustering techniques and developments. Neurocomputing 2017, 267, 664–681. [Google Scholar] [CrossRef] [Green Version]
  4. Dubes, R.C.; Jain, A.K. Algorithms for Clustering Data; Taylor & Francis: London, UK, 1988. [Google Scholar]
  5. Duda, R.O.; Hart, P.E.; Stork, D.G. Pattern Classification; John Wiley & Sons: Hoboken, NJ, USA, 2012. [Google Scholar]
  6. Francis, J.; George, S.N. A Unified Tensor Framework for Clustering and Simultaneous Reconstruction of Incomplete Imaging Data. ACM Trans. Multimed. Comput. Commun. Appl. TOMM 2020, 16, 1–24. [Google Scholar] [CrossRef]
  7. Parsons, L.; Haque, E.; Liu, H. Subspace clustering for high dimensional data: A review. ACM Sigkdd Explor. Newsl. 2004, 6, 90–105. [Google Scholar] [CrossRef]
  8. Wang, L.; Huang, J.; Yin, M.; Cai, R.; Hao, Z. Block diagonal representation learning for robust subspace clustering. Inf. Sci. 2020, 526, 54–67. [Google Scholar] [CrossRef]
  9. Zhang, H.; Zhai, H.; Zhang, L.; Li, P. Spectral–spatial sparse subspace clustering for hyperspectral remote sensing images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3672–3684. [Google Scholar] [CrossRef]
  10. Elhamifar, E.; Vidal, R. Sparse subspace clustering: Algorithm, theory, and applications. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 2765–2781. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  11. Abavisani, M.; Patel, V.M. Multimodal sparse and low-rank subspace clustering. Inf. Fusion 2018, 39, 168–177. [Google Scholar] [CrossRef]
  12. Yang, J.; Liang, J.; Wang, K.; Rosin, P.L.; Yang, M.H. Subspace Clustering via Good Neighbors. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 1537–1544. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Vidal, R. Subspace clustering. IEEE Signal Process. Mag. 2011, 28, 52–68. [Google Scholar] [CrossRef]
  14. Tang, K.; Su, Z.; Jiang, W.; Zhang, J.; Sun, X.; Luo, X. Robust subspace learning-based low-rank representation for manifold clustering. Neural Comput. Appl. 2019, 31, 7921–7933. [Google Scholar] [CrossRef]
  15. Von Luxburg, U. A tutorial on spectral clustering. Stat. Comput. 2007, 17, 395–416. [Google Scholar] [CrossRef]
  16. Shi, J.; Malik, J. Normalized cuts and image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 888–905. [Google Scholar]
  17. Ng, A.Y.; Jordan, M.I.; Weiss, Y. On spectral clustering: Analysis and an algorithm. In Advances in Neural Information Processing Systems; The MIT Press: Cambridge, MA, USA, 2002; pp. 849–856. [Google Scholar]
  18. Liu, G.; Lin, Z.; Yu, Y. Robust subspace segmentation by low-rank representation. ICML 2010, 1, 8. [Google Scholar]
  19. Wu, T.; Bajwa, W.U. A Low Tensor-Rank Representation Approach for Clustering of Imaging Data. IEEE Signal Process. Lett. 2018, 25, 1196–1200. [Google Scholar] [CrossRef]
  20. Kernfeld, E.; Aeron, S.; Kilmer, M. Clustering multi-way data: A novel algebraic approach. arXiv 2014, arXiv:1412.7056. [Google Scholar]
  21. Piao, X.; Hu, Y.; Gao, J.; Sun, Y.; Lin, Z.; Yin, B. Tensor sparse and low-rank based submodule clustering method for multi-way data. arXiv 2016, arXiv:1601.00149. [Google Scholar]
  22. Zhou, X.; Yang, C.; Zhao, H.; Yu, W. Low-rank modeling and its applications in image analysis. ACM Comput. Surv. CsUR 2014, 47, 1–33. [Google Scholar] [CrossRef] [Green Version]
  23. Johnson, A.; Francis, J.; Madathil, B.; George, S.N. A two-way optimization framework for clustering of images using weighted tensor nuclear norm approximation. In Proceedings of the 2020 National Conference on Communications (NCC), Kharagpur, India, 21–23 February 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–5. [Google Scholar]
  24. Madathil, B.; George, S.N. Noise robust image clustering based on reweighted low rank tensor approximation and $l_{1/2}$ regularization. Signal Image Video Process. 2020, 15, 341–349. [Google Scholar] [CrossRef]
  25. Xia, W.; Zhang, X.; Gao, Q.; Shu, X.; Han, J.; Gao, X. Multiview Subspace Clustering by an Enhanced Tensor Nuclear Norm. IEEE Trans. Cybern. 2021, 1–14. [Google Scholar] [CrossRef]
  26. Wu, T. Graph regularized low-rank representation for submodule clustering. Pattern Recognit. 2020, 100, 107145.
  27. Shijila, B.; Tom, A.J.; George, S.N. Simultaneous denoising and moving object detection using low rank approximation. Future Gener. Comput. Syst. 2019, 90, 198–210.
  28. Jin, K.H.; Ye, J.C. Sparse and low-rank decomposition of a Hankel structured matrix for impulse noise removal. IEEE Trans. Image Process. 2017, 27, 1448–1461.
  29. Cao, C.; Yu, J.; Zhou, C.; Hu, K.; Xiao, F.; Gao, X. Hyperspectral image denoising via subspace-based nonlocal low-rank and sparse factorization. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 973–988.
  30. Baburaj, M.; George, S.N. Tensor based approach for inpainting of video containing sparse text. Multimed. Tools Appl. 2019, 78, 1805–1829.
  31. Li, L.; Li, W.; Du, Q.; Tao, R. Low-rank and sparse decomposition with mixture of Gaussian for hyperspectral anomaly detection. IEEE Trans. Cybern. 2020, 51, 4363–4372.
  32. Guo, Q.; Zhang, C.; Zhang, Y.; Liu, H. An efficient SVD-based method for image denoising. IEEE Trans. Circuits Syst. Video Technol. 2015, 26, 868–880.
  33. Goldfarb, D.; Qin, Z. Robust low-rank tensor recovery: Models and algorithms. SIAM J. Matrix Anal. Appl. 2014, 35, 225–253.
  34. Fan, F.; Ma, Y.; Li, C.; Mei, X.; Huang, J.; Ma, J. Hyperspectral image denoising with superpixel segmentation and low-rank representation. Inf. Sci. 2017, 397, 48–68.
  35. Yin, M.; Gao, J.; Xie, S.; Guo, Y. Multiview subspace clustering via tensorial t-product representation. IEEE Trans. Neural Netw. Learn. Syst. 2018, 30, 851–864.
  36. Zhang, Z.; Ely, G.; Aeron, S.; Hao, N.; Kilmer, M. Novel methods for multilinear data completion and de-noising based on tensor-SVD. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 3842–3849.
  37. Tang, K.; Liu, R.; Su, Z.; Zhang, J. Structure-constrained low-rank representation. IEEE Trans. Neural Netw. Learn. Syst. 2014, 25, 2167–2179.
  38. Wei, L.; Wang, X.; Wu, A.; Zhou, R.; Zhu, C. Robust subspace segmentation by self-representation constrained low-rank representation. Neural Process. Lett. 2018, 48, 1671–1691.
  39. Xu, Z.; Chang, X.; Xu, F.; Zhang, H. L1/2 regularization: A thresholding representation theory and a fast solver. IEEE Trans. Neural Netw. Learn. Syst. 2012, 23, 1013–1027.
  40. Candes, E.J.; Wakin, M.B.; Boyd, S.P. Enhancing sparsity by reweighted l1 minimization. J. Fourier Anal. Appl. 2008, 14, 877–905.
  41. Zhang, T. Analysis of multi-stage convex relaxation for sparse regularization. J. Mach. Learn. Res. 2010, 11, 1081–1107.
  42. Tom, A.J.; George, S.N. A three-way optimization technique for noise robust moving object detection using tensor low-rank approximation, l1/2, and TTV regularizations. IEEE Trans. Cybern. 2019, 51, 1004–1014.
  43. Zeng, J.; Lin, S.; Wang, Y.; Xu, Z. L1/2 regularization: Convergence of iterative half thresholding algorithm. IEEE Trans. Signal Process. 2014, 62, 2317–2329.
  44. Ghimpeţeanu, G.; Batard, T.; Bertalmío, M.; Levine, S. A decomposition framework for image denoising algorithms. IEEE Trans. Image Process. 2015, 25, 388–399.
  45. Li, H.; He, X.; Yu, Z.; Luo, J. Noise-robust image fusion with low-rank sparse decomposition guided by external patch prior. Inf. Sci. 2020, 523, 14–37.
  46. Hu, W.; Tao, D.; Zhang, W.; Xie, Y.; Yang, Y. The twist tensor nuclear norm for video completion. IEEE Trans. Neural Netw. Learn. Syst. 2017, 28, 2961–2973.
  47. Nocedal, J.; Wright, S. Numerical Optimization; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2006.
  48. Cai, J.F.; Candès, E.J.; Shen, Z. A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 2010, 20, 1956–1982.
  49. Zeng, H.; Xie, X.; Cui, H.; Yin, H.; Ning, J. Hyperspectral image restoration via global L1-2 spatial–spectral total variation regularized local low-rank tensor recovery. IEEE Trans. Geosci. Remote Sens. 2021, 59, 3309–3325.
  50. Sheng, J.; Lv, G.; Xue, Z.; Wu, L.; Feng, Q. Mixed noise removal by bilateral weighted sparse representation. Circuits Syst. Signal Process. 2021, 40, 4490–4515.
Figure 1. Self-expressiveness property of free submodules. Red fibers denote non-zero fibers and grey fibers denote zero fibers; non-zero fibers correspond to intra-cluster coefficients, while zero fibers correspond to inter-cluster coefficients.
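The self-expressiveness illustrated in Figure 1 rests on the tensor t-product, under which each lateral slice of a data tensor is written as a t-linear combination of the other slices. The Python sketch below is only an editorial illustration of the t-product itself, computed in the Fourier domain; the function name t_product and the toy tensor sizes are assumptions, not the authors' code.

```python
import numpy as np

def t_product(A, B):
    """t-product of third-order tensors A (n1 x n2 x n3) and B (n2 x n4 x n3).

    Computed via FFT along the third mode, frontal-slice matrix products,
    and an inverse FFT back to the spatial domain.
    """
    n3 = A.shape[2]
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.empty((A.shape[0], B.shape[1], n3), dtype=complex)
    for k in range(n3):
        Cf[:, :, k] = Af[:, :, k] @ Bf[:, :, k]
    return np.real(np.fft.ifft(Cf, axis=2))

# Toy self-expressiveness shape check: X ≈ X * C for a coefficient tensor C.
rng = np.random.default_rng(0)
X = rng.standard_normal((30, 20, 30))   # 20 lateral slices (images) of size 30 x 30
C = rng.standard_normal((20, 20, 30))   # coefficient tensor (illustrative only)
print(t_product(X, C).shape)            # (30, 20, 30)
```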
Figure 2. Unit ball representation of the (a) l∞ norm, (b) l2 norm, (c) l1 norm, (d) l1/2 norm and (e) l0 norm in the three-dimensional space R^3.
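To connect Figure 2 to concrete numbers, the following sketch (an editorial illustration, not taken from the paper) evaluates the quantities whose unit balls are drawn; for p < 1 the l_p expression is only a quasi-norm, which is what gives the l1/2 ball its star-like, sparsity-promoting shape.

```python
import numpy as np

def lp_measure(x, p):
    """(sum_i |x_i|^p)^(1/p); a norm for p >= 1, a quasi-norm for 0 < p < 1."""
    x = np.abs(np.asarray(x, dtype=float))
    return (x ** p).sum() ** (1.0 / p)

x = np.array([0.5, -2.0, 0.0, 1.0])
print("l_inf :", np.max(np.abs(x)))      # largest magnitude
print("l_2   :", lp_measure(x, 2))       # Euclidean norm
print("l_1   :", lp_measure(x, 1))       # sum of magnitudes
print("l_1/2 :", lp_measure(x, 0.5))     # sparsity-promoting quasi-norm
print("l_0   :", np.count_nonzero(x))    # number of non-zero entries
```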
Figure 3. Illustration of noise removal for a single image from the Coil20 dataset. (a) Input image; (b) image with 20% sparse noise; (c) sparse noise content; (d) noise-removed image.
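Figure 3 is produced by the sparse and low-rank decomposition step, which separates each corrupted image into a low-rank component and a sparse noise component. The sketch below shows a generic, heavily simplified alternating-thresholding split of that kind; it is not the paper's l1/2-regularized ADMM formulation, and the thresholds, iteration count and function names are illustrative assumptions.

```python
import numpy as np

def soft_threshold(X, tau):
    """Entrywise soft thresholding (proximal operator of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def low_rank_plus_sparse(D, lam=None, tau=None, iters=50):
    """Very simplified alternating split D ≈ L (low rank) + S (sparse noise)."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(D.shape))        # common RPCA-style default
    if tau is None:
        tau = 0.1 * np.linalg.norm(D, 2)         # heuristic spectral threshold
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(iters):
        L = svt(D - S, tau)                      # update low-rank part
        S = soft_threshold(D - L, lam)           # update sparse part
    return L, S

# Toy usage: corrupt a rank-deficient "image" with sparse noise and split it back.
rng = np.random.default_rng(1)
clean = rng.standard_normal((64, 5)) @ rng.standard_normal((5, 64))    # low rank
noise = (rng.random((64, 64)) < 0.2) * rng.standard_normal((64, 64)) * 5.0
L, S = low_rank_plus_sparse(clean + noise)
print(np.linalg.matrix_rank(L), np.count_nonzero(np.abs(S) > 1e-6))
```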
Figure 4. Noise-removed images obtained using the proposed method for the UCSD dataset (a–g) and the Coil20 dataset (h–k) under various levels of sparse noise. The sparse noise level is indicated above each input image. First row: original input images; second row: images corrupted by various levels of sparse noise; third row: sparse noise content removed from each image; fourth row: noise-removed images.
Figure 5. Quantitative comparison of the purity, NMI and ARI metrics of the proposed method and state-of-the-art algorithms under various levels of sparse noise, using the Coil20 and UCSD datasets.
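Figure 5 reports purity, NMI and ARI. A common way to compute these scores from ground-truth and predicted cluster labels, assuming scikit-learn is available (the paper does not specify its implementation), is sketched below.

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score
from sklearn.metrics.cluster import contingency_matrix

def purity(labels_true, labels_pred):
    """Fraction of samples assigned to the majority true class of their cluster."""
    C = contingency_matrix(labels_true, labels_pred)
    return C.max(axis=0).sum() / C.sum()

labels_true = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
labels_pred = np.array([0, 0, 1, 1, 1, 1, 2, 2, 0])
print("Purity:", purity(labels_true, labels_pred))
print("NMI   :", normalized_mutual_info_score(labels_true, labels_pred))
print("ARI   :", adjusted_rand_score(labels_true, labels_pred))
```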
Figure 6. Noise-removed images obtained using the proposed method for the UCSD and Coil20 datasets under various levels of salt and pepper and Gaussian noise. The noise level of the salt and pepper or Gaussian noise is indicated above each input image. First row: original input images; second row: images corrupted by various levels of salt and pepper and Gaussian noise; third row: noise-removed images.
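The corrupted inputs shown in Figures 4 and 6 follow standard noise models: sparse (impulsive) noise affecting a fraction of pixels, salt and pepper noise with density d, and zero-mean Gaussian noise with variance σ_n^2. A minimal sketch of how such test images could be generated, assuming scikit-image and images normalized to [0, 1], is given below; the specific parameter values are examples only.

```python
import numpy as np
from skimage.util import random_noise

rng = np.random.default_rng(2)
image = rng.random((32, 32))   # stand-in for an image normalized to [0, 1]

# Salt and pepper noise with density d, as in Case I of Table 2.
sp_noisy = random_noise(image, mode='s&p', amount=0.1)

# Zero-mean Gaussian noise with variance sigma_n^2, as in Case II of Table 2.
gauss_noisy = random_noise(image, mode='gaussian', var=0.01)

# Sparse noise (Figure 4): a random fraction of pixels replaced by outliers.
mask = rng.random(image.shape) < 0.2
sparse_noisy = np.where(mask, rng.random(image.shape), image)
```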
Figure 7. Quantitative comparison of the purity and ARI metrics of the proposed method and state-of-the-art algorithms. (a,b) Purity and ARI metrics for the UCSD dataset under various levels of salt and pepper noise (density d); (c,d) purity and ARI metrics for the Coil20 dataset under various levels of Gaussian noise (variance σ_n^2).
Figure 8. Sensitivity analysis of the proposed method with the evaluation metrics NMI and ARI. (a) Sensitivity analysis of λ1 and λ2 with the ARI metric using the Coil20 dataset; (b) sensitivity analysis of λ3 and λ4 with the NMI metric using the Coil20 dataset; (c) sensitivity analysis of λ3 and λ4 with the ARI metric using the UCSD dataset.
Figure 9. Convergence analysis of the proposed method. (a) Convergence analysis with the NMI metric using the UCSD dataset; (b) convergence analysis with the MCR (m) metric using the Coil20 dataset.
Table 1. Comparison of MCR (m ± σ)% results for the Coil20, UCSD and MNIST datasets under various levels of sparse noise. Best values are highlighted in bold.
| Dataset | No. of Clusters | Sparse Noise (%) | Spectral [16] | SSmC [20] | SLRSmC [21] | SCLRSmC [19] | WTNN [23] | RLRTA l1/2 R [24] | Proposed |
|---|---|---|---|---|---|---|---|---|---|
| Coil20 | 3 | 5 | 3.95 ± 5.05 | 6.55 ± 9.66 | 7.69 ± 4.44 | 3.13 ± 2.22 | 1.82 ± 2.12 | 0.75 ± 1.96 | **0.45 ± 1.14** |
| | | 10 | 9.23 ± 10.53 | 10 ± 17.23 | 5.55 ± 9.42 | 5.05 ± 6.56 | 5.05 ± 5.25 | 3.26 ± 1.55 | **0.67 ± 1.08** |
| | | 15 | 16.52 ± 14.44 | 17.32 ± 12.63 | 9.65 ± 7.48 | 9.95 ± 4.46 | 5.95 ± 3.22 | 7.25 ± 4.62 | **2.15 ± 2.90** |
| | | 20 | 21.23 ± 10.12 | 20.32 ± 10.55 | 15.55 ± 13.26 | 10.36 ± 8.05 | 9.56 ± 7.25 | 9.05 ± 8.02 | **2.56 ± 4.42** |
| | | 30 | 27.32 ± 25.02 | 26.32 ± 18.99 | 22.42 ± 17.54 | 19.25 ± 12.33 | 17.65 ± 14.22 | 15.33 ± 10.90 | **10.24 ± 11.23** |
| | | 40 | 34.56 ± 16.45 | 32.99 ± 12.56 | 33.21 ± 18.10 | 23.95 ± 16.52 | 20.44 ± 17.44 | 18.24 ± 14.64 | **14.98 ± 10.15** |
| | | 50 | 38.85 ± 23.23 | 37.36 ± 20.33 | 37.77 ± 10.18 | 28.66 ± 16.12 | 28.12 ± 19.31 | 21.02 ± 12.75 | **18.44 ± 11.22** |
| UCSD | 3 | 5 | 4.04 ± 5.62 | 7.78 ± 13.47 | 3.33 ± 4.56 | 4.12 ± 6.52 | 5.25 ± 2.12 | 0.55 ± 1.22 | **0.22 ± 0.95** |
| | | 10 | 6.26 ± 9.33 | 9.97 ± 15.39 | 4.04 ± 5.09 | 4.96 ± 5.34 | 4.80 ± 6.23 | 1.65 ± 2.32 | **1.22 ± 2.54** |
| | | 15 | 14.52 ± 12.22 | 12.22 ± 21.16 | 8.4 ± 2.19 | 6.22 ± 8.92 | 5.23 ± 6.55 | **2.23 ± 3.85** | 3.04 ± 3.46 |
| | | 20 | 19.52 ± 15.35 | 16.66 ± 25.86 | 14.4 ± 6.55 | 12.27 ± 13.46 | 11.02 ± 10.52 | 6.65 ± 5.98 | **5.25 ± 3.90** |
| | | 30 | 24.52 ± 12.85 | 17.66 ± 28.8 | 18.55 ± 22.19 | 16.35 ± 15.52 | 12.52 ± 12.05 | 11.24 ± 9.25 | **10.62 ± 9.44** |
| | | 40 | 31.23 ± 26.22 | 20.00 ± 26.45 | 30.05 ± 11.55 | 14.44 ± 25.01 | 19.25 ± 20.01 | 17.40 ± 10.53 | **14.05 ± 8.92** |
| | | 50 | 36.23 ± 21.22 | 38.44 ± 22.19 | 32.21 ± 10.25 | 27.45 ± 18.25 | 27.25 ± 12.22 | 23.22 ± 6.52 | **16.33 ± 7.58** |
| MNIST | 3 | 5 | 7.25 ± 2.15 | 7.15 ± 7.71 | 7.51 ± 7.22 | 5.90 ± 5.34 | 5.55 ± 1.98 | 2.6 ± 1.35 | **1.49 ± 0.95** |
| | | 10 | 9.35 ± 6.35 | 12.14 ± 11.09 | 10.05 ± 8.09 | 9.25 ± 2.94 | 9.12 ± 2.55 | 7.65 ± 5.04 | **4.40 ± 3.11** |
| | | 15 | 20.04 ± 9.45 | 17.35 ± 10.12 | 12.05 ± 9.02 | 12.43 ± 5.32 | 10.02 ± 5.37 | 9.56 ± 7.85 | **7.12 ± 4.67** |
| | | 20 | 27.46 ± 9.14 | 21.56 ± 10.43 | 27.21 ± 9.12 | 19.56 ± 12.21 | 14.29 ± 7.34 | 11.25 ± 6.04 | **7.85 ± 5.45** |
| | | 30 | 25.92 ± 16.05 | 25.00 ± 12.36 | 21.73 ± 12.58 | 22.93 ± 8.46 | 19.23 ± 7.37 | 18.63 ± 8.98 | **15.22 ± 10.33** |
| | | 40 | 43.86 ± 13.71 | 26.80 ± 19.55 | 38.09 ± 9.25 | 29.19 ± 8.65 | 29.23 ± 11.45 | 25.52 ± 13.14 | **21.35 ± 10.25** |
| | | 50 | 44.14 ± 17.39 | 46.83 ± 20.24 | 45.50 ± 10.21 | 35.20 ± 15.67 | 35.45 ± 10.65 | 31.25 ± 9.58 | **28.45 ± 12.45** |
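The misclassification rate (MCR) reported in Table 1 (and in Table 2 below) is, by the usual convention, one minus the clustering accuracy after optimally matching predicted clusters to ground-truth classes. A sketch of this computation, assuming SciPy's Hungarian solver (the paper does not specify its implementation), is:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics.cluster import contingency_matrix

def misclassification_rate(labels_true, labels_pred):
    """MCR = 1 - accuracy after optimally matching clusters to classes."""
    C = contingency_matrix(labels_true, labels_pred)
    row_ind, col_ind = linear_sum_assignment(-C)   # maximize matched samples
    return 1.0 - C[row_ind, col_ind].sum() / C.sum()

labels_true = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
labels_pred = np.array([1, 1, 1, 2, 2, 0, 0, 0, 0])
print("MCR:", misclassification_rate(labels_true, labels_pred))
```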
Table 2. Compared results of MCR (m ± σ)% using the Coil20 and UCSD datasets under various levels of salt and pepper and Gaussian noise. For salt and pepper noise, noise of density d is added; for Gaussian noise, noise of variance σ_n^2 is added. Best values are highlighted in bold.
Case I: Salt and Pepper Noise (d), UCSD dataset

| Noise Level (d) | Spectral [16] | SSmC [20] | SLRSmC [21] | SCLRSmC [19] | WTNN [23] | RLRTA l1/2 R [24] | Proposed |
|---|---|---|---|---|---|---|---|
| 0.03 | 4.07 ± 5.25 | 3.95 ± 1.95 | 3.25 ± 1.04 | 2.53 ± 1.32 | 2.04 ± 1.66 | 1.95 ± 1.45 | **0.55 ± 0.64** |
| 0.05 | 4.20 ± 7.33 | 4.41 ± 2.36 | 3.50 ± 1.68 | 3.85 ± 1.27 | 2.32 ± 2.5 | 2.45 ± 1.80 | **0.60 ± 1.15** |
| 0.1 | 7.88 ± 5.60 | 8.45 ± 3.66 | 5.50 ± 7.22 | 6.55 ± 4.85 | 5.25 ± 7.05 | 3.05 ± 2.45 | **1.11 ± 1.80** |
| 0.2 | 11.32 ± 4.25 | 10.50 ± 7.45 | 11.25 ± 9.38 | 9.28 ± 6.68 | 5.38 ± 2.16 | 6.20 ± 2.56 | **1.60 ± 1.55** |
| 0.3 | 13.20 ± 7.85 | 11.75 ± 8.29 | 31.25 ± 8.67 | 12.12 ± 7.46 | 9.05 ± 10.12 | 8.75 ± 9.55 | **3.02 ± 4.54** |

Case II: Gaussian Noise (σ_n^2), Coil20 dataset

| Noise Level (σ_n^2) | Spectral [16] | SSmC [20] | SLRSmC [21] | SCLRSmC [19] | WTNN [23] | RLRTA l1/2 R [24] | Proposed |
|---|---|---|---|---|---|---|---|
| 0.005 | 3.11 ± 2.42 | 2.25 ± 3.56 | 3.04 ± 1.80 | 2.75 ± 1.04 | 2.5 ± 1.02 | 2.5 ± 1.35 | **1.13 ± 0.95** |
| 0.01 | 9.85 ± 2.85 | 3.80 ± 7.25 | 4.57 ± 7.64 | 3.56 ± 5.83 | 3.52 ± 5.56 | 3.05 ± 4.82 | **1.45 ± 2.09** |
| 0.02 | 12.52 ± 9.50 | 13.22 ± 9.42 | 10.15 ± 10.50 | 9.22 ± 9.23 | 9.52 ± 7.45 | 5.52 ± 8.52 | **4.21 ± 6.52** |
| 0.03 | 18.47 ± 9.35 | 18.08 ± 6.24 | 15.45 ± 9.50 | 11.05 ± 7.15 | 10.50 ± 5.53 | 8.25 ± 6.40 | **6.16 ± 5.05** |
| 0.05 | 18.03 ± 11.51 | 15.80 ± 10.65 | 16.74 ± 14.44 | 13.80 ± 7.42 | 12.36 ± 9.27 | 9.54 ± 9.31 | **6.27 ± 5.38** |
| 0.10 | 23.03 ± 11.51 | 21.65 ± 10.11 | 20.33 ± 12.44 | 19.21 ± 17.52 | 15.05 ± 11.22 | 11.80 ± 10.02 | **9.03 ± 8.47** |
Table 3. Execution time comparison of algorithms.
| Method | Coil20 (s) | MNIST (s) | UCSD (s) |
|---|---|---|---|
| Spectral [16] | 8.7 | 6.5 | 9.1 |
| SSmC [20] | 9.2 | 8.4 | 10.3 |
| SLRSmC [21] | 14.9 | 13.7 | 17.5 |
| SCLRSmC [19] | 17.8 | 16.7 | 21.4 |
| WTNN [23] | 21.6 | 19.2 | 24.3 |
| RLRTA l1/2 R [24] | 20.5 | 14.2 | 21.8 |
| Proposed | 26.5 | 22.1 | 29.5 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
