Article

A New Clustering Method Based on the Inversion Formula

Mantas Lukauskas * and Tomas Ruzgas
Department of Applied Mathematics, Faculty of Mathematics and Natural Sciences, Kaunas University of Technology, 44249 Kaunas, Lithuania
*
Author to whom correspondence should be addressed.
Mathematics 2022, 10(15), 2559; https://doi.org/10.3390/math10152559
Submission received: 21 June 2022 / Revised: 14 July 2022 / Accepted: 18 July 2022 / Published: 22 July 2022

Abstract

Data clustering is an area of data mining that falls into the unsupervised learning class. Cluster analysis divides data into different classes by discovering the internal structure of data set objects and their relationships. This paper presents a new density clustering method based on the modified inversion formula density estimation. The new method is intended to improve the performance and robustness of k-means, the Gaussian mixture model, and other methods. The proposed clustering algorithm consists of three main steps. First, we initialize the parameters and generate a T matrix. Second, we estimate the density of each point and cluster. Third, we update the mean, sigma, and phi matrices. The new method based on the inversion formula works well with a range of datasets compared with k-means, the Gaussian mixture model, and the Bayesian Gaussian mixture model. On the other hand, the new method has limitations: in its current state, it cannot work with higher-dimensional data (d > 15). This will be solved in future versions of the model, as detailed in the future work. Additionally, the results show that the MIDEv2 method works best with the generated data with outliers in all datasets (0.5%, 1%, 2%, and 4% outliers). Interestingly, the new method based on the inversion formula can also cluster data that do not have outliers, such as the well-known Iris dataset.

1. Introduction

Artificial intelligence was first mentioned in 1956, but for a long time it was not widely applied. In recent decades, however, it has been used extensively: the ever-increasing computational power available has driven the availability, development, and application of artificial intelligence. Data mining is one of the most important of these areas, as it is not limited to business, manufacturing, or other services. For this reason, data research has attracted a large number of researchers. Data clustering is an area of data mining that falls into the class of unsupervised learning. Cluster analysis divides data into different classes by discovering the internal structure of data set objects and their relationships. Clustering aims to create groups of similar observations/elements: the most similar elements end up in one cluster and dissimilar elements in separate clusters [1].
With the increasing application of data mining, cluster analysis of data is also being applied in many areas: pattern recognition [2,3], bioinformatics [4,5], environment sciences [6], feature selection [7,8], or to solve different healthcare tasks. Clustering algorithms can be used to detect various diseases [9]. For example, different clustering techniques are used to identify breast cancer [10], Parkinson’s disease [11,12], various psychological and psychiatric disorders [13], heart diseases and diabetes [14], and Alzheimer’s disease [15,16], among many others.
Although there are many clustering methods, the clustering problem remains complex and continues to be actively studied. A given clustering method often does not work well with all data sets, so alternative methods are very much needed. One of the most widely used algorithms is k-means, since it is fast and works well with certain data sets, but there is still considerable room to improve its accuracy.
Research has focused heavily on developing new density estimation procedures [17,18]. Moreover, in recent years, various authors have proposed robust density estimation methods, including ones based on neural networks: Parzen neural networks [19], soft constrained neural networks [20], and others [21]. Some time ago, we presented a modified inversion formula for density estimation [22]. In that work, we found that this density estimate performed better on different data than several other density estimators. We therefore raised the hypothesis that modified inversion formula density estimation would also be suitable for data clustering. Accordingly, this paper presents a new density clustering method based on the modified inversion formula density estimation. The new method should improve the performance and robustness of k-means, the Gaussian mixture model, and other methods. The main process of the proposed clustering algorithm consists of three steps. First, we initialize the parameters and generate a T matrix. Second, we estimate the densities of each point and cluster. Third, we update the mean, sigma, and phi matrices. To compare results, we used the k-means, Gaussian mixture model (GMM), and Bayesian Gaussian mixture model (BGMM) clustering methods.
This paper is organized as follows. In Section 2, we introduce the inversion formula and the modified inversion formula density estimates and explain the idea behind them; the process of the proposed algorithm is also presented in Section 2. In Section 3, we present the empirical results, including the datasets used in the research, the evaluation metrics, and the experimental results. Finally, conclusions and future work on the new clustering method are given in Section 4.

2. Estimation of the Density of the Modified Inversion Formula

Estimating probability density functions (pdf) is considered one of the most important parts of statistical modeling. It allows us to express random variables as functions of other variables while simultaneously allowing the detection of potentially hidden relationships in the data. If the distribution density f(x) satisfies the equation
f(x) = \sum_{k=1}^{q} p_k f_k(x) = f(x, \theta)   (1)
then the random vector X ∈ R^d satisfies a distribution mixture model. In Formula (1), θ is a multi-dimensional parameter of the model, f_k(x) is a distribution density function, and X is a d-dimensional random vector with distribution density f(x). Additionally, we have independent copies X(1), …, X(n) of X (a sample of X).
We say that the sample satisfies the mixture model if each X(t) satisfies (1). The size n is called the sample size (volume). The parameter q is called the number of mixture components, and p_k is the a priori probability. They satisfy the following conditions:
p_k > 0, \quad \sum_{k=1}^{q} p_k = 1   (2)
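As a small illustration of the mixture model (1) and the constraints (2), the following Python sketch evaluates a two-component Gaussian mixture density with SciPy; the function and variable names are ours and purely illustrative.
import numpy as np
from scipy.stats import multivariate_normal

def mixture_density(x, weights, means, covs):
    # Evaluate f(x) = sum_k p_k f_k(x) as in Equation (1).
    return sum(p * multivariate_normal.pdf(x, mean=m, cov=c)
               for p, m, c in zip(weights, means, covs))

# Two-component example in R^2; the weights satisfy Equation (2).
weights = [0.6, 0.4]
means = [np.zeros(2), np.array([3.0, 3.0])]
covs = [np.eye(2), 0.5 * np.eye(2)]
print(mixture_density(np.array([1.0, 1.0]), weights, means, covs))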

2.1. Gaussian Mixture and Inversion Density Estimation

It is important to note that the projection (3) of the observations of the Gaussian mixture (1) onto a direction τ is also distributed according to a (one-dimensional) Gaussian mixture model:
f_\tau(x) = \sum_{k=1}^{q} p_{k,\tau} \varphi_{k,\tau}(x) = f_\tau(x, \theta_\tau)   (3)
where φ_{k,τ}(x) = φ(x; m_{k,τ}, σ²_{k,τ}) is a one-dimensional Gaussian density. The parameters of the multivariate mixture and the parameters θ_τ = (p_{k,τ}, m_{k,τ}, σ²_{k,τ}), k = 1, …, q, of the projected data distribution are linked by the equalities
p_{j,\tau} = p_j, \quad m_{j,\tau} = \tau^\top M_j, \quad \sigma_{j,\tau}^2 = \tau^\top R_j \tau   (4)
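As a brief illustration of Equality (4), the following sketch (our own, with illustrative names) projects the parameters of a multivariate Gaussian mixture onto a unit direction τ to obtain the parameters of the one-dimensional projected mixture (3).
import numpy as np

def project_mixture(tau, weights, means, covs):
    # Map multivariate mixture parameters to the projected 1D mixture, Equation (4).
    tau = np.asarray(tau, dtype=float)
    proj_means = [tau @ M for M in means]        # m_{k,tau} = tau^T M_k
    proj_vars = [tau @ R @ tau for R in covs]    # sigma_{k,tau}^2 = tau^T R_k tau
    return list(weights), proj_means, proj_vars  # p_{k,tau} = p_k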
Using the inversion formula,
f(x) = \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} e^{-i t^\top x}\, \psi(t)\, dt   (5)
where ψ(t) = E e^{i t^T X} denotes the characteristic function of the random vector X. First, the set T of projection directions is selected. Then, the characteristic function is replaced by its estimate, yielding the following formula:
\hat{f}(x) = \frac{A(d)}{\# T} \sum_{\tau \in T} \int_0^{\infty} e^{-i u \tau^\top x}\, \hat{\psi}_\tau(u)\, u^{d-1} e^{-h u^2}\, du   (6)
where here and below, # denotes the number of elements in the set. With the formula for the volume of a d-dimensional sphere
V_d(R) = \frac{\pi^{d/2} R^d}{\Gamma\!\left(\frac{d}{2}+1\right)} = \begin{cases} \dfrac{\pi^{d/2} R^d}{\left(\frac{d}{2}\right)!}, & \text{if } d \bmod 2 = 0 \\ \dfrac{2^{\frac{d+1}{2}} \pi^{\frac{d-1}{2}} R^d}{d!!}, & \text{if } d \bmod 2 = 1 \end{cases}   (7)
one can calculate the constant A(d) depending on the dimension of the data:
A(d) = \frac{\left( V_d(1) \right)'_R}{(2\pi)^d} = \frac{d}{2^d\, \pi^{d/2}\, \Gamma\!\left(\frac{d}{2}+1\right)}   (8)
Simulation studies show that the density estimates given by the inversion formula are discontinuous/rough. The multiplier e^{-hu²} in Formula (6) further smoothes the estimate f̂(x) with a Gaussian kernel function. It is worth noting that this form of the multiplier allows the value of the integral to be calculated analytically. Furthermore, results of extended Monte Carlo studies have shown that using this multiplier reduces the estimation errors. Formula (6) can be used with various estimates of the characteristic function of the projected data. A parametric estimate of the characteristic function was used in the present case:
\hat{\psi}_\tau(u) = \sum_{k=1}^{\hat{q}_\tau} \hat{p}_{k,\tau}\, e^{\,i u \hat{m}_{k,\tau} - u^2 \hat{\sigma}_{k,\tau}^2 / 2}   (9)
The chosen form of the smoothing multiplier e^{-hu²} allows us to relate the smoothing parameter h to the variances of the projected clusters.
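To make the estimate (6) concrete, the following sketch evaluates it by direct numerical integration with the parametric characteristic function (9) and the constant (8). It is only an illustration under our own assumptions: the truncation bound, integration grid, smoothing parameter h, and all names are ours, and the per-direction mixture parameters are assumed to have already been estimated (for example, by a one-dimensional EM on each projection).
import numpy as np
from scipy.special import gamma

def A(d):
    # Constant from Equation (8), depending only on the data dimension d.
    return d / (2**d * np.pi**(d / 2) * gamma(d / 2 + 1))

def psi_hat(u, weights, means, sigmas):
    # Parametric characteristic-function estimate of the projected data, Equation (9).
    return sum(p * np.exp(1j * u * m - u**2 * s**2 / 2)
               for p, m, s in zip(weights, means, sigmas))

def inversion_density(x, T, proj_params, h=0.05, u_max=20.0, n_u=2000):
    # Numerical sketch of the inversion-formula estimate, Equation (6).
    # T: array of unit projection directions, shape (#T, d);
    # proj_params: per-direction (weights, means, sigmas) of the projected mixture.
    d = len(x)
    u = np.linspace(1e-6, u_max, n_u)   # truncate the improper integral
    du = u[1] - u[0]
    total = 0.0
    for tau, (w, m, s) in zip(T, proj_params):
        integrand = (np.exp(-1j * u * (tau @ x)) * psi_hat(u, w, m, s)
                     * u**(d - 1) * np.exp(-h * u**2))
        total += np.sum(integrand.real) * du   # keep the real part of the integral
    return A(d) / len(T) * total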

2.2. Modified Inversion Density Estimation

It is worth noting that the estimate based on the Gaussian mixture model (1) (with f_k = φ_k) only estimates well those distribution densities that are close to a Gaussian mixture. This can be seen as a drawback of the inversion formula method with the parametric estimate (9). The density estimation by the inversion formula often becomes complicated due to a large number of components with low a priori probability when the aim is to approximate the density under study by a mixture of Gaussian distributions. This problem can be solved by using a noise cluster.
We discuss a modified density estimation algorithm based on a multivariate Gaussian mixture model (Algorithm 1). However, first, let us define the parametric estimate of the characteristic function of a uniform distribution density:
\hat{\psi}(u) = \frac{2}{(b-a)\, u} \sin\frac{(b-a)\, u}{2} \cdot e^{\,i u \frac{a+b}{2}}   (10)
In the formula for calculating the density estimate, the estimate of the characteristic function (9) is constructed as a combination of the characteristic functions of a mixture of Gaussian distributions and of a uniform distribution, with corresponding a priori probabilities:
\hat{\psi}_\tau(u) = \sum_{k=1}^{\hat{q}_\tau} \hat{p}_{k,\tau}\, e^{\,i u \hat{m}_{k,\tau} - u^2 \hat{\sigma}_{k,\tau}^2 / 2} + \hat{p}_{0,\tau}\, \frac{2}{(b-a)\, u} \sin\frac{(b-a)\, u}{2} \cdot e^{\,i u \frac{a+b}{2}}   (11)
Here, the second term describes the noise cluster with a uniform distribution, p̂_0 is the weight of the noise cluster, and
a(\tau) = (\tau^\top x)_{\min} - \frac{(\tau^\top x)_{\max} - (\tau^\top x)_{\min}}{2(n-1)}   (12)
b(\tau) = (\tau^\top x)_{\max} + \frac{(\tau^\top x)_{\max} - (\tau^\top x)_{\min}}{2(n-1)}   (13)
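A minimal sketch of the modified characteristic-function estimate (11) with its uniform noise cluster, using the bounds (12) and (13); the function names are ours, and the snippet assumes u ≠ 0.
import numpy as np

def uniform_cf(u, a, b):
    # Characteristic function of the uniform density on [a, b], Equation (10); u must be nonzero.
    return 2.0 / ((b - a) * u) * np.sin((b - a) * u / 2.0) * np.exp(1j * u * (a + b) / 2.0)

def modified_psi_hat(u, weights, means, sigmas, p0, a, b):
    # Gaussian-mixture characteristic function plus a uniform noise cluster, Equation (11).
    gauss_part = sum(p * np.exp(1j * u * m - u**2 * s**2 / 2)
                     for p, m, s in zip(weights, means, sigmas))
    return gauss_part + p0 * uniform_cf(u, a, b)

def noise_bounds(proj, n):
    # Bounds a(tau) and b(tau) of the projected sample, Equations (12) and (13).
    lo, hi = proj.min(), proj.max()
    half_gap = (hi - lo) / (2 * (n - 1))
    return lo - half_gap, hi + half_gap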

2.3. Modified Inversion Density Clustering Algorithm

This section overviews the key aspects of the new modified inversion density estimation (MIDE) clustering method (Algorithm 1). The clustering algorithm uses the EM (expectation maximization) algorithm. The selection of the initial parameters of the EM algorithm is of particular importance for the clustering results, as each new combination of parameters can steer the clustering in a different direction. Random parameter selection is one of the most commonly used solutions for parameter initialization [23,24]. Random selection of initial parameters is a reasonably simple solution, as it is easy to implement. However, one of its significant disadvantages is that such initialization often results in significant deviations in the clustering results. In addition, the algorithm can initialize the initial cluster centers by sampling from a continuous uniform distribution with density
p(x) = \frac{1}{b-a}   (14)
Another initialization method provided in the software implementation is the selection of random points. In this case, the initial cluster centers are not selected randomly from the entire space but by randomly selecting points from the observations in the data set. However, this selection also has several drawbacks, as randomly selected points can be too close to each other, and the selected points can also be outliers in the data.
Hierarchical clustering can also be used to address the potential shortcomings of the random cluster center selection method. Such a classification algorithm, which maintains a Gaussian mixture model to form a cluster tree, was first described by Fraley in 1998 [25]. Maitra [26] proposed a hierarchical clustering based on mean connectivity to obtain the initial model means. Moreover, Meila and Heckerman [27] experimentally demonstrated that an algorithm using a pattern-based distance measure is better than a random method; this method is applied to the initial means of the model. Perhaps its only major drawback observed so far is that the computations take a long time and require a large amount of memory when there are many data.
One of the most commonly used approaches for selecting cluster centers is k-means or another heuristic clustering method; it is among the most widely used initial parameter selection methods. In the case of k-means initialization, random cluster centers μ_1, μ_2, …, μ_k ∈ R^n are first selected, and then the following procedure is performed until convergence is achieved:
c^{(i)} = \arg\min_j \left\| x^{(i)} - \mu_j \right\|^2   (15)
\mu_j = \frac{\sum_{i=1}^{m} \mathbf{1}\{ c^{(i)} = j \}\, x^{(i)}}{\sum_{i=1}^{m} \mathbf{1}\{ c^{(i)} = j \}}   (16)
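For reference, a compact NumPy version of the Lloyd iteration defined by (15) and (16), as it might be used for center initialization; this is a sketch with illustrative defaults, not the implementation used in the paper.
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    # Plain Lloyd iteration for Equations (15) and (16).
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), size=k, replace=False)]   # random observations as initial centers
    for _ in range(n_iter):
        # Assignment step, Equation (15): nearest center for every observation.
        c = np.argmin(((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1), axis=1)
        # Update step, Equation (16): mean of the points assigned to each center.
        new_mu = np.array([X[c == j].mean(axis=0) if np.any(c == j) else mu[j]
                           for j in range(k)])
        if np.allclose(new_mu, mu):
            break
        mu = new_mu
    return c, mu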
Clustering using the modified inversion formula density estimation and the EM algorithm is explained below. If the distribution density of a random vector X has q maxima, then it can be approximated by a mixture of q single-mode distribution densities:
f(x) = \sum_{k=1}^{q} p_k f_k(x)   (17)
Suppose that the distribution of X depends on a random variable v, which takes the values 1, …, q with the corresponding probabilities p1, …, pq. In classification theory, v is interpreted as the number of the class to which the observed object belongs. Thus, the observations X(t) correspond to v(t), t = 1, …, n. The functions fk are treated as the densities of the conditional distribution of X under the condition v = k. Based on this approach, loose (soft) clustering of the sample is understood as the a posteriori probabilities
\pi_k(x) = P\{ v = k \mid X = x \}   (18)
for all x ∈ {X(1), …, X(n)}. Strict (hard) clustering of the sample is an estimate of the random variables v(1), …, v(n), obtained by partitioning the sample into subsets based on the equality
\hat{v}(t) = \arg\max_{k = 1, \dots, q} \hat{\pi}_k(X(t))   (19)
The estimates π̂_k are obtained by approximating the unknown components of the distribution density with the inversion formula density estimates and using the EM (expectation maximization) algorithm. We briefly describe it as follows. Suppose that Equation (17) holds and fk is the inversion formula density function for the Gaussian mixture model, k = 1, …, q, where q is the number of clusters. In this case, let us denote the right side of (17) by f(x, θ), where θ = (pk, Mk, Rk, a, b, k = 1, …, q). The following equality applies:
\pi_k(x) = \frac{p_k f_k(x)}{f(x, \theta)}, \quad k = \overline{1, q}   (20)
Having an estimate of θ, the estimates of the probabilities πk (the k-th cluster probabilities) are obtained from (20) using the plug-in ("embedding") method, i.e., by replacing the unknown parameters on the right-hand side with their statistical estimates. The EM algorithm is an iterative procedure for computing the maximum likelihood estimate
\theta^* = \arg\max_{\theta} L(\theta), \quad L(\theta) = \prod_{t=1}^{n} f(X(t), \theta)   (21)
and the corresponding estimates π̂_k. Several authors have independently proposed this algorithm for Gaussian mixture analysis, including Hasselblad [28] and Behboodian [29]. Its properties were later examined in detail in refs. [30,31,32] and other works. The EM algorithm has received much attention in various review articles and monographs [33,34,35]. Suppose that after r cycles, we obtain the estimates π̂_k = π̂_k^(r). The new estimate θ̂ = θ̂^(r+1) is then defined by the equations:
\hat{p}_k = \frac{1}{n} \sum_{t=1}^{n} \hat{\pi}_k(X(t))   (22)
\hat{M}(k) = \frac{1}{n \hat{p}_k} \sum_{t=1}^{n} \hat{\pi}_k(X(t))\, X(t)   (23)
\hat{R}(k) = \frac{1}{n \hat{p}_k} \sum_{t=1}^{n} \hat{\pi}_k(X(t)) \left[ X(t) - \hat{M}(k) \right] \left[ X(t) - \hat{M}(k) \right]^\top   (24)
where k = 1, …, q. Substituting θ̂^(r+1) into the right-hand side of (20), we find π̂_k^(r+1)(X(t)), k = 1, …, q, t = 1, …, n. As a result of this recursive procedure, we obtain a non-decreasing sequence L(θ̂^(r)), but whether it converges to the point of the global maximum depends very much on the initial estimate θ̂^(0) (or π̂^(0)).
Algorithm 1: Clustering Algorithm Based on the Modified Inversion Formula Density Estimation (MIDE)
Input: Data set X = [X1, X2, …, Xn], cluster number K
Output: C1, C2, …, Ct and M̂, p̂_k, R̂
Possible initialization of the mean vector:
(1) random uniform initialization
(2) k-means
(3) random point initialization
Generate a T matrix. The set T is calculated so that the projection directions are evenly spaced on the sphere.
1. For i = 1 : t do
2.   Density estimation for each point and cluster based on (9)
3.   Update M̂, p̂_k, R̂ values based on (22)–(24)
4. End
5. Return C1, C2, …, Ct and M̂, p̂_k, R̂
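The sketch below mirrors the structure of Algorithm 1's EM loop. It is not the authors' implementation: for brevity, plain Gaussian component densities stand in for the inversion-formula estimate (9), which is the MIDE-specific ingredient, and all names and defaults are illustrative.
import numpy as np
from scipy.stats import multivariate_normal

def em_cluster(X, K, n_iter=50, seed=0):
    # Skeleton of the EM loop of Algorithm 1 with Gaussian stand-in densities.
    n, d = X.shape
    rng = np.random.default_rng(seed)
    p = np.full(K, 1.0 / K)                              # prior weights p_k
    M = X[rng.choice(n, size=K, replace=False)]          # initial means (random points)
    R = np.array([np.cov(X.T) + 1e-6 * np.eye(d)] * K)   # initial covariances
    for _ in range(n_iter):
        # E step, Equation (20): posterior probabilities pi_k(X(t)).
        dens = np.column_stack([multivariate_normal.pdf(X, M[k], R[k]) for k in range(K)])
        post = p * dens
        post /= post.sum(axis=1, keepdims=True)
        # M step, Equations (22)-(24): update weights, means, and covariances.
        p = post.mean(axis=0)
        M = (post.T @ X) / (n * p)[:, None]
        for k in range(K):
            diff = X - M[k]
            R[k] = (post[:, k, None] * diff).T @ diff / (n * p[k]) + 1e-6 * np.eye(d)
    # Hard labels by Equation (19).
    return post.argmax(axis=1), M, p, R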
In the case of the Gaussian mixture model (GMM), the best number of clusters is selected based on an information criterion. The most commonly used information criteria for this purpose are AIC, BIC, and others. When such an information criterion reaches its global minimum or maximum, the optimal number of clusters can be said to have been reached. However, there are also some problems in applying these criteria. First, it is necessary to compute the global maximum of the likelihood as the largest of the local maxima, but this sometimes fails. Therefore, no procedure can guarantee that such a global maximum will be found in every case.
On the other hand, applying these criteria assumes that one of the parametric models being compared is correct. This assumption makes the criterion unstable. These arguments raise the question of whether it may be worthwhile to use nonparametric criteria to test the adequacy of the distribution mixture model. Several problems can be encountered if the correct number of clusters is not selected. If the number of components selected is too small, then no clear clusters are formed, and one cluster includes more than one true group. Meanwhile, if the selected number of clusters is too large, it is much more challenging to compute the clusters in the first place, and less generalizable clusters are also obtained. An attempt to accurately select the number of clusters was provided by Xie et al. [36], who proposed an adaptive selection of the components/clusters of the Gaussian mixture model.
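As an example of information-criterion-based selection of the number of components, the following short sketch fits scikit-learn Gaussian mixtures for a range of component counts and keeps the one with the lowest BIC (illustrative only, not part of the proposed method).
import numpy as np
from sklearn.mixture import GaussianMixture

def select_k_by_bic(X, k_max=10):
    # Fit GMMs with 1..k_max components and return the count with the lowest BIC.
    bics = [GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X).bic(X)
            for k in range(1, k_max + 1)]
    return int(np.argmin(bics)) + 1, bics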

3. Experimental Analyses

This section provides information about the proposed clustering method based on the modified inversion formula. It consists of three parts. The first part provides information on the clustering evaluation metrics used in the empirical study. The second part provides information on the data sets used in the study. Finally, the third part presents the study's main results.

3.1. Evaluation Metrics

This section presents the main evaluation metrics used in the empirical study. In order to evaluate the results of clustering, it is essential to choose appropriate evaluation metrics, as they can determine how the clustering is judged. In this study, the clustering methods were compared using the J-Score [37], Normalized Mutual Information (NMI) [38], Adjusted Rand Index (ARI) [39], Accuracy (ACC) [40], and the Fowlkes–Mallows index (FMI) [41]. These metrics were chosen because the actual data clusters are known in advance; if the clusters were not known in advance, evaluation metrics such as the Calinski–Harabasz score, also known as the Variance Ratio Criterion [42], the Davies–Bouldin score [43], or others could be used.
J-score. Ahmadinejad and Liu [37] suggested a new clustering evaluation metric, the J-score. The J-score is a simple and robust measure of clustering accuracy. It addresses the matching problem and reduces the risk of overfitting that challenge existing accuracy measures [37]. Bidirectional set matching: suppose a dataset contains N datapoints belonging to T true classes, and cluster analysis produces K hypothetical clusters. To establish the correspondence between T and K, we first consider each class as a reference and identify its best-matched cluster (T→K). Specifically, for a class t ∈ T, we search for a cluster k ∈ K that has the highest Jaccard index,
I_t = \max_{k \in K} \frac{|V_t \cap V_k|}{|V_t \cup V_k|}   (25)
where Vt and Vk are the sets of datapoints belonging to class t and cluster k, respectively, and |·| denotes the size of a set. We then consider each cluster as a reference and identify its best-matched class (K→T) using a similar procedure. For a cluster k ∈ K, we search for a class t ∈ T with the highest Jaccard index,
I_k = \max_{t \in T} \frac{|V_t \cap V_k|}{|V_t \cup V_k|}   (26)
Calculating overall accuracy: to quantify the accuracy, we aggregate the Jaccard indices of individual clusters and classes, accounting for their relative sizes (i.e., numbers of data points). We first calculate a weighted sum of It across all classes as R = \sum_{t \in T} \frac{|V_t|}{N} I_t, and a weighted sum of Ik across all clusters as P = \sum_{k \in K} \frac{|V_k|}{N} I_k. We then take their harmonic mean as the J-score,
J = \frac{2 \times R \times P}{R + P}   (27)
To work with this metric, we implemented the calculation of the J-score metric in the Python programming language. The program code is available in Appendix A.
Normalized Mutual Information (NMI). The mutual information (MI) of two random variables is a measure of the mutual dependence of the two variables. The MI is normalized for greater comparability and better interpretation, which gives the NMI metric. The values of this metric range from 0 to 1.0; zero indicates no relationship between the variables, while one indicates a perfect correlation.
\mathrm{NMI} = \frac{\mathrm{MI}(Y, Y')}{\sqrt{H(Y)\, H(Y')}}   (28)
Here, Y′ are the predicted labels and Y are the actual classes known in advance, MI(Y, Y′) is the mutual information between the predicted and actual labels, and H(·) denotes the entropy of the predicted or actual labels.
Adjusted Rand Index (ARI). The Rand index evaluates the similarity between two clusterings. Pairs of all observations are used to calculate this similarity: for each pair, the index records whether its assignment to clusters coincides with the true cluster labels.
\mathrm{ARI} = \frac{2(ad - bc)}{(a+b)(b+d) + (a+c)(c+d)}   (29)
In the given formula, a is the number of pairs of points that are assigned to the same cluster and also belong to the same true class (the predicted and actual assignments match); b is the number of pairs that belong to the same true class but are assigned to different clusters; c is the number of pairs that are assigned to the same cluster but belong to different true classes; and d is the number of pairs for which neither the predicted nor the actual assignment puts the two points together.
Accuracy (ACC). Accuracy is often used to measure the quality of classification. It is also used for clustering. It is calculated as the sum of the diagonal elements of the confusion matrix, divided by the number of samples to obtain a value between 0 and 1.
\mathrm{ACC} = \frac{1}{N} \sum_{i=1}^{k} n_i   (30)
where N is the total number of data points in the dataset, ni is the number of data points correctly assigned to the corresponding cluster i, and k is the number of clusters.
The Fowlkes–Mallows index (FMI). The Fowlkes–Mallows score FMI is defined as the geometric mean of pairwise precision and recall.
\mathrm{FMI} = \frac{TP}{\sqrt{(TP + FP)(TP + FN)}}   (31)
True Positives (TP) are the pairs of points that belong to the same cluster in both the true and the predicted labels. False Positives (FP) are the pairs of points that belong to the same cluster in the true labels but not in the predicted labels. False Negatives (FN) are the pairs of points that belong to the same cluster in the predicted labels but not in the true labels. The higher the metric value, the better the cluster separation (the maximum possible value of the metric is 1, and the minimum is 0).
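The paper does not state which implementations of these metrics were used; the sketch below relies on scikit-learn and SciPy equivalents, and the cluster-to-class mapping for ACC is computed here by Hungarian matching, which is an assumption on our part.
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import (normalized_mutual_info_score,
                             adjusted_rand_score, fowlkes_mallows_score)

def clustering_accuracy(truth, pred):
    # ACC of Equation (30), with clusters mapped to classes so that the matched counts are maximal.
    truth, pred = np.asarray(truth), np.asarray(pred)
    classes, clusters = np.unique(truth), np.unique(pred)
    counts = np.zeros((len(clusters), len(classes)), dtype=int)
    for i, c in enumerate(clusters):
        for j, t in enumerate(classes):
            counts[i, j] = np.sum((pred == c) & (truth == t))
    rows, cols = linear_sum_assignment(-counts)   # Hungarian matching, maximizing matches
    return counts[rows, cols].sum() / len(truth)

def external_scores(truth, pred):
    return {"NMI": normalized_mutual_info_score(truth, pred),
            "ARI": adjusted_rand_score(truth, pred),
            "FMI": fowlkes_mallows_score(truth, pred),
            "ACC": clustering_accuracy(truth, pred)}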

3.2. Experimental Datasets

To test the developed method and compare it with other methods, 25 data sets were used in this study. The data sets can be divided into three categories: synthetic, real, and generated data with outliers. Synthetic data sets are data sets that have been generated by other authors and are often used in research on clustering methods. Real datasets include datasets such as Iris, Wine, Diabetes, and others; these datasets were also selected based on datasets used by other authors. The third category, generated data with outliers, consists of Gaussian data generated with a certain proportion of outliers: 0.5%, 1%, 2%, and 4%. These datasets are used to evaluate how the different methods handle data with the corresponding amount of outliers. Table 1 shows the data sets used.

3.3. Performances of Clustering Methods

To avoid the possible influence of a lucky parameter initialization on the test results, all experiments were performed 10,000 times. For the k-means method, the initial cluster centers were selected randomly, keeping the best solution out of 100 runs. For GMM (Gaussian Mixture Model), BGMM (Bayesian Gaussian Mixture Model), and clustering based on the modified inversion density estimation (MIDE), the initial centers were selected based on the k-means center initialization. The following table provides the accuracy metric values for the different clustering algorithms. Other evaluation metrics, such as NMI, ARI, FMI, and J-Score, can be found in the Appendix B tables.
The accuracy results for the different datasets are presented in Table 2. It can be seen that the new method based on the inversion formula works well with different datasets compared with k-means, the Gaussian Mixture Model (GMM), and the Bayesian Gaussian Mixture Model (BGMM).
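For context, a hedged sketch of how the three reference methods could be run with scikit-learn, with the GMM seeded by k-means centers as described above (the BGMM uses its default k-means-based initialization); all parameter values are illustrative and not taken from the paper.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture, BayesianGaussianMixture

def run_baselines(X, k, seed=0):
    # k-means keeps the best of 100 random initializations.
    km = KMeans(n_clusters=k, n_init=100, random_state=seed).fit(X)
    # The GMM is seeded with the k-means centers; the BGMM uses its default k-means init.
    gmm = GaussianMixture(n_components=k, means_init=km.cluster_centers_,
                          random_state=seed).fit(X)
    bgmm = BayesianGaussianMixture(n_components=k, random_state=seed).fit(X)
    return {"K-Means": km.labels_, "GMM": gmm.predict(X), "BGMM": bgmm.predict(X)}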

4. Discussion

Research has focused heavily on developing new density estimation procedures [17,18]. Moreover, in recent years, various authors have proposed robust density estimation methods, including ones based on neural networks such as Parzen neural networks [19], soft constrained neural networks [20], and others [21]. This paper presented a new clustering method based on the modified inversion formula density estimation (MIDE). The new method improves the performance and robustness of k-means, the Gaussian mixture model, and other methods. The method works as follows: first, we initialize the parameters and generate a T matrix; second, we estimate the densities of each point and cluster based on the modified inversion formula; third, we update the mean, sigma, and phi matrices.
Based on the results presented earlier, it is possible to conclude that the newly presented method works well with different clustering datasets even if the datasets do not have any outliers. The results on the generated cluster data with outliers showed that the newly presented method (MIDEv2) works best in all situations (0.5%, 1%, 2%, and 4% outliers); on all of these datasets, its accuracy was higher than 0.995. Interestingly, the new method based on the inversion formula can also cluster data without outliers, such as the well-known Iris dataset. Comparing the accuracy results on the other datasets, the MIDE method achieved 0.955 accuracy on the Iris dataset, compared with 0.953 for the second-best GMM method; using the ARI metric for this dataset, the MIDE methods likewise showed better results than the other methods. Based on the NMI, J-Score, and FMI metrics (see Table A1), the better method for the Iris dataset would be GMM. It is difficult to compare and use multiple metrics; in this research, we used accuracy as our main metric, since all datasets have labels and the accuracy of our clustering methods can therefore be calculated. Compared with other researchers' past results, Sun et al. achieved 0.925 accuracy with the SVC-KM approach [44], and Hyde and Angelov achieved 0.950 accuracy with DDC (Data Density-Based Clustering) [45]. Additionally, it is notable that the MIDE method has a lower standard deviation than the other methods used in this research.
It is worth mentioning that the method also has limitations. Based on the experimental study, the method in its current state cannot work with higher-dimensional data (d > 15). This occurs because of the T matrix generation: as the dimension grows, finding a suitable T matrix becomes harder. This will be solved in future versions of the model, and we will present more about it in future work. Another problem is speed: at the current stage, the method is slower than the other methods, but this can be addressed by parallelizing the process on the programming side. A future direction for the newly created method is its application to deep clustering. As can be seen, the MIDEv1 and MIDEv2 methods do not work very well with higher-dimensional data; a deep clustering method with an encoder structure could solve this problem.

Author Contributions

Conceptualization, T.R. and M.L.; methodology, T.R.; software, T.R. and M.L.; formal analysis, T.R. and M.L.; investigation, T.R. and M.L.; writing—original draft preparation, T.R., M.L.; writing—review and editing, M.L.; supervision, T.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors thank the area editor and the reviewers for giving valuable comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

J-score metric calculation program code in the Python language
import numpy as np

def JScore(truth, pred):
    # Both label vectors must describe the same observations.
    if len(truth) == len(pred):
        print("Equal lengths")

        # Indicator matrix A: one row per predicted cluster.
        A = np.empty([0, len(truth)], bool)
        for i in list(set(pred)):
            A = np.vstack([A, (np.array(pred) == i)])
        suma = A.sum(axis=1)          # predicted cluster sizes |V_k|

        # Indicator matrix B: one row per true class.
        B = np.empty([0, len(truth)], bool)
        for i in list(set(truth)):
            B = np.vstack([B, (np.array(truth) == i)])
        suma2 = B.sum(axis=1)         # true class sizes |V_t|

        # Jaccard index between every cluster and every class, Equations (25) and (26).
        C = np.empty([len(suma), len(suma2)], float)
        for i in range(0, len(suma)):
            for j in range(0, len(suma2)):
                C[i, j] = sum(A[i, ] & B[j, ]) / sum(A[i, ] | B[j, ])

        # Size-weighted best matches and their harmonic mean, Equation (27).
        M1 = sum(np.amax(C, axis=1) * suma) / A.shape[1]
        M11 = sum(np.amax(C, axis=0) * suma2) / A.shape[1]
        M2 = 2 * M1 * M11 / (M1 + M11)

        return M2
    else:
        print('Truth and Pred have different lengths.')
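A brief hypothetical usage example of the JScore function above:
# Hypothetical usage of the JScore function defined above.
truth = [0, 0, 1, 1, 2, 2]
pred = [1, 1, 0, 0, 2, 2]
print(JScore(truth, pred))   # 1.0, since the partitions match up to a relabeling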

Appendix B

Table A1. Comparative table of different models (means and standard deviation) based on Normalized Mutual Information (NMI) for 10,000 runs.
Dataset | K-Means mean (std) | GMM mean (std) | BGMM mean (std) | MIDEv1 mean (std) | MIDEv2 mean (std)
Synthetic
Aggregation | 0.836 (0.004) | 0.886 (0.035) | 0.909 (0.041) | 0.779 (0.006) | 0.845 (0.005)
Atom | 0.289 (0.003) | 0.170 (0.036) | 0.194 (0.028) | 0.310 (0.004) | 0.319 (0.003)
D31 | 0.969 (0.005) | 0.951 (0.008) | 0.871 (0.004) | 0.791 (0.007) | 0.822 (0.006)
R15 | 0.994 (0.000) | 0.989 (0.012) | 0.868 (0.014) | 0.881 (0.001) | 0.909 (0.001)
Gaussians1 | 1.000 (0.000) | 1.000 (0.000) | 1.000 (0.000) | 1.000 (0.000) | 1.000 (0.000)
Threenorm | 0.024 (0.001) | 0.047 (0.039) | 0.007 (0.002) | 0.069 (0.001) | 0.076 (0.001)
Twenty | 1.000 (0.000) | 0.996 (0.008) | 0.956 (0.026) | 1.000 (0.000) | 0.988 (0.005)
Wingnut | 0.562 (0.000) | 0.778 (0.002) | 0.779 (0.000) | 0.459 (0.000) | 0.420 (0.001)
Real
Breast | 0.547 (0.011) | 0.659 (0.003) | 0.630 (0.003) | - | -
CPU | 0.487 (0.013) | 0.398 (0.025) | 0.389 (0.033) | 0.467 (0.013) | 0.529 (0.011)
Dermatology | 0.862 (0.009) | 0.809 (0.044) | 0.862 (0.049) | - | -
Diabetes | 0.090 (0.004) | 0.084 (0.041) | 0.105 (0.017) | 0.089 (0.004) | 0.106 (0.003)
Ecoli | 0.636 (0.004) | 0.636 (0.016) | 0.639 (0.010) | 0.592 (0.004) | 0.534 (0.004)
Glass | 0.303 (0.019) | 0.327 (0.052) | 0.364 (0.042) | 0.304 (0.020) | 0.369 (0.024)
Heart-statlog | 0.363 (0.005) | 0.270 (0.055) | 0.263 (0.058) | 0.339 (0.008) | 0.308 (0.007)
Iono | 0.125 (0.000) | 0.305 (0.052) | 0.299 (0.024) | - | -
Iris | 0.657 (0.006) | 0.890 (0.04) | 0.751 (0.011) | 0.841 (0.007) | 0.763 (0.008)
Wine | 0.876 (0.000) | 0.856 (0.055) | 0.926 (0.054) | 0.822 (0.001) | 0.799 (0.003)
Thyroid | 0.559 (0.000) | 0.783 (0.059) | 0.661 (0.051) | 0.382 (0.009) | 0.390 (0.008)
Generated clusters with outliers
2 clusters (0.5% outliers) | 0.976 (0.000) | 0.976 (0.000) | 0.976 (0.000) | 0.977 (0.000) | 1.000 (0.000)
2 clusters (1% outliers) | 0.947 (0.000) | 0.957 (0.000) | 0.957 (0.000) | 0.958 (0.000) | 0.974 (0.000)
2 clusters (2% outliers) | 0.916 (0.000) | 0.925 (0.000) | 0.925 (0.000) | 0.928 (0.000) | 0.976 (0.000)
2 clusters (4% outliers) | 0.867 (0.000) | 0.876 (0.000) | 0.876 (0.000) | 0.886 (0.000) | 0.972 (0.000)
3 clusters (0.5% outliers) | 0.978 (0.000) | 0.978 (0.000) | 0.978 (0.000) | 0.978 (0.000) | 0.993 (0.000)
3 clusters (1% outliers) | 0.964 (0.000) | 0.964 (0.000) | 0.964 (0.000) | 0.964 (0.000) | 0.986 (0.000)
3 clusters (2% outliers) | 0.943 (0.000) | 0.943 (0.000) | 0.943 (0.000) | 0.945 (0.000) | 0.985 (0.000)
3 clusters (4% outliers) | 0.907 (0.000) | 0.901 (0.000) | 0.898 (0.000) | 0.911 (0.000) | 0.982 (0.000)
Bold underlined values indicate best results for each dataset.
Table A2. Comparative analysis of different models (means and standard deviation) based on the adjusted Rand Index (ARI) for 10,000 runs.
Dataset | K-Means mean (std) | GMM mean (std) | BGMM mean (std) | MIDEv1 mean (std) | MIDEv2 mean (std)
Synthetic
Aggregation | 0.725 (0.008) | 0.795 (0.069) | 0.860 (0.089) | 0.687 (0.035) | 0.862 (0.023)
Atom | 0.176 (0.003) | 0.058 (0.028) | 0.076 (0.024) | 0.204 (0.006) | 0.221 (0.004)
D31 | 0.949 (0.016) | 0.903 (0.027) | 0.634 (0.017) | 0.494 (0.037) | 0.529 (0.026)
R15 | 0.993 (0.000) | 0.975 (0.036) | 0.608 (0.020) | 0.747 (0.021) | 0.786 (0.018)
Gaussians1 | 1.000 (0.000) | 1.000 (0.000) | 1.000 (0.000) | 1.000 (0.000) | 1.000 (1.000)
Threenorm | 0.032 (0.001) | 0.058 (0.045) | 0.009 (0.002) | 0.088 (0.003) | 0.089 (0.002)
Twenty | 1.000 (0.000) | 0.986 (0.028) | 0.836 (0.096) | 1.000 (0.000) | 1.000 (0.000)
Wingnut | 0.670 (0.000) | 0.862 (0.001) | 0.863 (0.000) | 0.565 (0.007) | 0.533 (0.005)
Real
Breast | 0.664 (0.008) | 0.772 (0.003) | 0.747 (0.003) | - | -
CPU | 0.529 (0.014) | 0.315 (0.070) | 0.336 (0.081) | 0.461 (0.043) | 0.708 (0.026)
Dermatology | 0.712 (0.038) | 0.697 (0.096) | 0.728 (0.112) | - | -
Diabetes | 0.058 (0.003) | 0.059 (0.046) | 0.079 (0.028) | 0.059 (0.005) | 0.086 (0.002)
Ecoli | 0.505 (0.008) | 0.649 (0.011) | 0.665 (0.014) | 0.551 (0.013) | 0.423 (0.015)
Glass | 0.162 (0.014) | 0.178 (0.055) | 0.211 (0.040) | 0.151 (0.024) | 0.229 (0.011)
Heart-statlog | 0.451 (0.005) | 0.352 (0.072) | 0.344 (0.075) | 0.422 (0.013) | 0.452 (0.011)
Iono | 0.168 (0.000) | 0.383 (0.066) | 0.368 (0.049) | - | -
Iris | 0.617 (0.009) | 0.888 (0.077) | 0.654 (0.030) | 0.819 (0.029) | 0.888 (0.008)
Wine | 0.897 (0.000) | 0.869 (0.072) | 0.932 (0.063) | 0.835 (0.031) | 0.865 (0.012)
Thyroid | 0.583 (0.000) | 0.850 (0.075) | 0.735 (0.074) | 0.297 (0.045) | 0.356 (0.015)
Generated blobs with outliers
2 clusters (0.5% outliers) | 0.991 (0.000) | 0.990 (0.000) | 0.990 (0.000) | 0.993 (0.000) | 1.000 (0.000)
2 clusters (1% outliers) | 0.976 (0.000) | 0.980 (0.000) | 0.980 (0.000) | 0.980 (0.000) | 0.992 (0.000)
2 clusters (2% outliers) | 0.957 (0.000) | 0.961 (0.000) | 0.961 (0.000) | 0.961 (0.000) | 0.989 (0.000)
2 clusters (4% outliers) | 0.920 (0.000) | 0.924 (0.000) | 0.924 (0.000) | 0.928 (0.000) | 0.990 (0.000)
3 clusters (0.5% outliers) | 0.990 (0.000) | 0.990 (0.000) | 0.990 (0.000) | 0.991 (0.000) | 0.997 (0.000)
3 clusters (1% outliers) | 0.982 (0.000) | 0.982 (0.000) | 0.982 (0.000) | 0.984 (0.000) | 0.993 (0.000)
3 clusters (2% outliers) | 0.967 (0.000) | 0.967 (0.000) | 0.967 (0.000) | 0.967 (0.000) | 0.993 (0.000)
3 clusters (4% outliers) | 0.938 (0.000) | 0.925 (0.000) | 0.918 (0.000) | 0.941 (0.000) | 0.992 (0.000)
Bold underlined values indicate best results for each dataset.
Table A3. Comparative table of different models (means and standard deviation) based on the J-Score for 10,000 runs.
Dataset | K-Means mean (std) | GMM mean (std) | BGMM mean (std) | MIDEv1 mean (std) | MIDEv2 mean (std)
Synthetic
Aggregation | 0.780 (0.007) | 0.800 (0.071) | 0.870 (0.062) | 0.831 (0.009) | 0.871 (0.012)
Atom | 0.556 (0.002) | 0.501 (0.004) | 0.503 (0.005) | 0.575 (0.004) | 0.582 (0.004)
D31 | 0.951 (0.017) | 0.901 (0.029) | 0.581 (0.019) | 0.556 (0.031) | 0.609 (0.042)
R15 | 0.993 (0.000) | 0.975 (0.038) | 0.664 (0.011) | 0.756 (0.041) | 0.834 (0.027)
Gaussians1 | 1.000 (0.000) | 1.000 (0.000) | 1.000 (0.000) | 1.000 (0.000) | 1.000 (0.000)
Threenorm | 0.420 (0.001) | 0.443 (0.050) | 0.381 (0.005) | 0.481 (0.003) | 0.496 (0.004)
Twenty | 1.000 (0.000) | 0.984 (0.030) | 0.838 (0.075) | 1.000 (0.002) | 0.986 (0.005)
Wingnut | 0.834 (0.000) | 0.931 (0.001) | 0.932 (0.000) | 0.779 (0.000) | 0.808 (0.000)
Real
Breast | 0.833 (0.004) | 0.887 (0.001) | 0.874 (0.002) | - | -
CPU | 0.656 (0.013) | 0.489 (0.058) | 0.500 (0.077) | 0.733 (0.011) | 0.751 (0.010)
Dermatology | 0.719 (0.038) | 0.699 (0.079) | 0.730 (0.106) | - | -
Diabetes | 0.252 (0.004) | 0.283 (0.033) | 0.299 (0.028) | 0.275 (0.008) | 0.307 (0.004)
Ecoli | 0.557 (0.009) | 0.655 (0.018) | 0.663 (0.006) | 0.606 (0.008) | 0.663 (0.007)
Glass | 0.340 (0.010) | 0.362 (0.036) | 0.365 (0.032) | 0.397 (0.012) | 0.412 (0.009)
Heart-statlog | 0.720 (0.003) | 0.663 (0.043) | 0.659 (0.045) | 0.714 (0.005) | 0.727 (0.004)
Iono | 0.549 (0.000) | 0.686 (0.018) | 0.673 (0.031) | - | -
Iris | 0.730 (0.008) | 0.923 (0.064) | 0.752 (0.029) | 0.889 (0.012) | 0.905 (0.009)
Wine | 0.935 (0.000) | 0.917 (0.052) | 0.958 (0.046) | 0.904 (0.012) | 0.917 (0.011)
Thyroid | 0.787 (0.000) | 0.914 (0.035) | 0.856 (0.038) | 0.639 (0.007) | 0.675 (0.008)
Generated blobs with outliers
2 clusters (0.5% outliers) | 0.993 (0.000) | 0.993 (0.000) | 0.993 (0.000) | 0.993 (0.000) | 1.000 (0.000)
2 clusters (1% outliers) | 0.983 (0.000) | 0.985 (0.000) | 0.985 (0.000) | 0.985 (0.000) | 0.991 (0.000)
2 clusters (2% outliers) | 0.969 (0.000) | 0.971 (0.000) | 0.971 (0.000) | 0.972 (0.000) | 0.994 (0.000)
2 clusters (4% outliers) | 0.942 (0.000) | 0.944 (0.000) | 0.944 (0.000) | 0.946 (0.000) | 0.996 (0.000)
3 clusters (0.5% outliers) | 0.991 (0.000) | 0.991 (0.000) | 0.991 (0.000) | 0.993 (0.000) | 0.998 (0.000)
3 clusters (1% outliers) | 0.983 (0.000) | 0.983 (0.000) | 0.983 (0.000) | 0.985 (0.000) | 0.995 (0.000)
3 clusters (2% outliers) | 0.969 (0.000) | 0.969 (0.000) | 0.969 (0.000) | 0.972 (0.000) | 0.994 (0.000)
3 clusters (4% outliers) | 0.941 (0.000) | 0.932 (0.000) | 0.927 (0.000) | 0.945 (0.000) | 0.993 (0.000)
Bold underlined values indicate best results for each dataset.
Table A4. Different models were compared (means and standard deviation) based on the Fowlkes–Mallows index (FMI) for 10,000 runs.
Dataset | K-Means mean (std) | GMM mean (std) | BGMM mean (std) | MIDEv1 mean (std) | MIDEv2 mean (std)
Synthetic
Aggregation | 0.785 (0.006) | 0.840 (0.055) | 0.891 (0.070) | 0.875 (0.011) | 0.867 (0.015)
Atom | 0.654 (0.001) | 0.653 (0.006) | 0.649 (0.003) | 0.659 (0.002) | 0.669 (0.003)
D31 | 0.951 (0.015) | 0.906 (0.025) | 0.681 (0.012) | 0.645 (0.011) | 0.689 (0.016)
R15 | 0.993 (0.000) | 0.977 (0.033) | 0.682 (0.016) | 0.779 (0.011) | 0.817 (0.009)
Gaussians1 | 1.000 (0.000) | 1.000 (0.000) | 1.000 (0.000) | 1.000 (0.000) | 1.000 (0.000)
Threenorm | 0.518 (0.000) | 0.535 (0.030) | 0.514 (0.002) | 0.552 (0.002) | 0.559 (0.003)
Twenty | 1.000 (0.000) | 0.987 (0.026) | 0.857 (0.075) | 1.000 (0.000) | 0.984 (0.004)
Wingnut | 0.835 (0.000) | 0.931 (0.001) | 0.932 (0.000) | 0.792 (0.001) | 0.764 (0.001)
Real
Breast | 0.847 (0.004) | 0.893 (0.001) | 0.881 (0.001) | - | -
CPU | 0.771 (0.006) | 0.619 (0.052) | 0.633 (0.065) | 0.802 (0.012) | 0.871 (0.009)
Dermatology | 0.769 (0.030) | 0.760 (0.074) | 0.784 (0.087) | - | -
Diabetes | 0.326 (0.002) | 0.382 (0.017) | 0.378 (0.028) | 0.375 (0.008) | 0.389 (0.007)
Ecoli | 0.625 (0.006) | 0.740 (0.008) | 0.762 (0.009) | 0.678 (0.006) | 0.698 (0.006)
Glass | 0.393 (0.012) | 0.435 (0.058) | 0.437 (0.048) | 0.540 (0.021) | 0.519 (0.015)
Heart-statlog | 0.734 (0.002) | 0.683 (0.026) | 0.679 (0.028) | 0.724 (0.011) | 0.737 (0.009)
Iono | 0.601 (0.000) | 0.711 (0.004) | 0.698 (0.023) | - | -
Iris | 0.743 (0.006) | 0.927 (0.041) | 0.781 (0.011) | 0.899 (0.005) | 0.877 (0.005)
Wine | 0.932 (0.000) | 0.914 (0.042) | 0.955 (0.038) | 0.895 (0.011) | 0.886 (0.008)
Thyroid | 0.841 (0.000) | 0.931 (0.023) | 0.888 (0.022) | 0.705 (0.013) | 0.736 (0.009)
Generated blobs with outliers
2 clusters (0.5% outliers) | 0.995 (0.000) | 0.995 (0.000) | 0.995 (0.000) | 0.996 (0.000) | 1.000 (0.000)
2 clusters (1% outliers) | 0.988 (0.000) | 0.990 (0.000) | 0.990 (0.000) | 0.990 (0.000) | 0.994 (0.000)
2 clusters (2% outliers) | 0.978 (0.000) | 0.980 (0.000) | 0.980 (0.000) | 0.981 (0.000) | 0.996 (0.000)
2 clusters (4% outliers) | 0.960 (0.000) | 0.961 (0.000) | 0.951 (0.000) | 0.963 (0.000) | 0.995 (0.000)
3 clusters (0.5% outliers) | 0.993 (0.000) | 0.993 (0.000) | 0.993 (0.000) | 0.993 (0.000) | 0.998 (0.000)
3 clusters (1% outliers) | 0.988 (0.000) | 0.988 (0.000) | 0.988 (0.000) | 0.991 (0.000) | 0.996 (0.000)
3 clusters (2% outliers) | 0.978 (0.000) | 0.978 (0.000) | 0.978 (0.000) | 0.981 (0.000) | 0.996 (0.000)
3 clusters (4% outliers) | 0.959 (0.000) | 0.951 (0.000) | 0.948 (0.000) | 0.964 (0.000) | 0.995 (0.000)
Bold underlined values indicate best results for each dataset.

References

  1. Ding, S.; Jia, H.; Du, M.; Xue, Y. A semi-supervised approximate spectral clustering algorithm based on HMRF model. Inf. Sci. 2018, 429, 215–228. [Google Scholar] [CrossRef]
  2. Liu, A.-A.; Nie, W.-Z.; Gao, Y.; Su, Y.-T. View-based 3-D model retrieval: A benchmark. IEEE Trans. Cybern. 2017, 48, 916–928. [Google Scholar] [CrossRef] [PubMed]
  3. Nie, W.; Cheng, H.; Su, Y. Modeling temporal information of mitotic for mitotic event detection. IEEE Trans. Big Data 2017, 3, 458–469. [Google Scholar] [CrossRef]
  4. Karim, M.R.; Beyan, O.; Zappa, A.; Costa, I.G.; Rebholz-Schuhmann, D.; Cochez, M.; Decker, S. Deep learning-based clustering approaches for bioinformatics. Brief. Bioinform. 2021, 22, 393–415. [Google Scholar] [CrossRef] [Green Version]
  5. Kim, T.; Chen, I.R.; Lin, Y.; Wang, A.Y.Y.; Yang, J.Y.H.; Yang, P. Impact of similarity metrics on single-cell RNA-seq data clustering. Brief. Bioinform. 2019, 20, 2316–2326. [Google Scholar] [CrossRef] [PubMed]
  6. Govender, P.; Sivakumar, V. Application of k-means and hierarchical clustering techniques for analysis of air pollution: A review (1980–2019). Atmos. Pollut. Res. 2020, 11, 40–56. [Google Scholar] [CrossRef]
  7. Xu, S.; Yang, X.; Yu, H.; Yu, D.-J.; Yang, J.; Tsang, E.C. Multi-label learning with label-specific feature reduction. Knowl. -Based Syst. 2016, 104, 52–61. [Google Scholar] [CrossRef]
  8. Liu, K.; Yang, X.; Yu, H.; Mi, J.; Wang, P.; Chen, X. Rough set based semi-supervised feature selection via ensemble selector. Knowl. -Based Syst. 2019, 165, 282–296. [Google Scholar] [CrossRef]
  9. Wiwie, C.; Baumbach, J.; Röttger, R. Comparing the performance of biomedical clustering methods. Nat. Methods 2015, 12, 1033–1038. [Google Scholar] [CrossRef] [PubMed]
  10. Chen, C.-H. A hybrid intelligent model of analyzing clinical breast cancer data using clustering techniques with feature selection. Appl. Soft Comput. 2014, 20, 4–14. [Google Scholar] [CrossRef]
  11. Polat, K. Classification of Parkinson’s disease using feature weighting method on the basis of fuzzy C-means clustering. Int. J. Syst. Sci. 2012, 43, 597–609. [Google Scholar] [CrossRef]
  12. Nilashi, M.; Ibrahim, O.; Ahani, A. Accuracy improvement for predicting Parkinson’s disease progression. Sci. Rep. 2016, 6, 1–18. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Trevithick, L.; Painter, J.; Keown, P. Mental health clustering and diagnosis in psychiatric in-patients. BJPsych Bull. 2015, 39, 119–123. [Google Scholar] [CrossRef] [PubMed]
  14. Yilmaz, N.; Inan, O.; Uzer, M.S. A new data preparation method based on clustering algorithms for diagnosis systems of heart and diabetes diseases. J. Med. Syst. 2014, 38, 48–59. [Google Scholar] [CrossRef]
  15. Alashwal, H.; El Halaby, M.; Crouse, J.J.; Abdalla, A.; Moustafa, A.A. The application of unsupervised clustering methods to Alzheimer’s disease. Front. Comput. Neurosci. 2019, 13, 31. [Google Scholar] [CrossRef]
  16. Farouk, Y.; Rady, S. Early diagnosis of alzheimer’s disease using unsupervised clustering. Int. J. Intell. Comput. Inf. Sci. 2020, 20, 112–124. [Google Scholar] [CrossRef]
  17. Li, D.; Yang, K.; Wong, W.H. Density estimation via discrepancy based adaptive sequential partition. Adv. Neural Inf. Process. Syst. 2016, 29. [Google Scholar]
  18. Rothfuss, J.; Ferreira, F.; Walther, S.; Ulrich, M. Conditional density estimation with neural networks: Best practices and benchmarks. arXiv 2019, arXiv:1903.00954. [Google Scholar]
  19. Trentin, E.; Lusnig, L.; Cavalli, F. Parzen neural networks: Fundamentals, properties, and an application to forensic anthropology. Neural Netw. 2018, 97, 137–151. [Google Scholar] [CrossRef]
  20. Trentin, E. Soft-constrained neural networks for nonparametric density estimation. Neural Process. Lett. 2018, 48, 915–932. [Google Scholar] [CrossRef]
  21. Huynh, H.T.; Nguyen, L. Nonparametric maximum likelihood estimation using neural networks. Pattern Recognit. Lett. 2020, 138, 580–586. [Google Scholar] [CrossRef]
  22. Ruzgas, T.; Lukauskas, M.; Čepkauskas, G. Nonparametric Multivariate Density Estimation: Case Study of Cauchy Mixture Model. Mathematics 2021, 9, 2717. [Google Scholar] [CrossRef]
  23. Biernacki, C.; Celeux, G.; Govaert, G. Choosing starting values for the EM algorithm for getting the highest likelihood in multivariate Gaussian mixture models. Comput. Stat. Data Anal. 2003, 41, 561–575. [Google Scholar] [CrossRef]
  24. Xu, Q.; Yuan, S.; Huang, T. Multi-dimensional uniform initialization Gaussian mixture model for spar crack quantification under uncertainty. Sensors 2021, 21, 1283. [Google Scholar] [CrossRef]
  25. Fraley, C. Algorithms for model-based Gaussian hierarchical clustering. SIAM J. Sci. Comput. 1998, 20, 270–281. [Google Scholar] [CrossRef] [Green Version]
  26. Maitra, R. Initializing partition-optimization algorithms. IEEE/ACM Trans. Comput. Biol. Bioinform. 2009, 6, 144–157. [Google Scholar] [CrossRef] [Green Version]
  27. Meila, M.; Heckerman, D. An experimental comparison of several clustering and initialization methods. arXiv 2013, arXiv:1301.7401. [Google Scholar]
  28. Hasselblad, V. Estimation of parameters for a mixture of normal distributions. Technometrics 1966, 8, 431–444. [Google Scholar] [CrossRef]
  29. Behboodian, J. On a mixture of normal distributions. Biometrika 1970, 57, 215–217. [Google Scholar] [CrossRef]
  30. Ćwik, J.; Koronacki, J. Multivariate density estimation: A comparative study. Neural Comput. Appl. 1997, 6, 173–185. [Google Scholar] [CrossRef] [Green Version]
  31. Tsuda, K.; Akaho, S.; Asai, K. The em algorithm for kernel matrix completion with auxiliary data. J. Mach. Learn. Res. 2003, 4, 67–81. [Google Scholar]
  32. Lartigue, T.; Durrleman, S.; Allassonnière, S. Deterministic approximate EM algorithm; Application to the Riemann approximation EM and the tempered EM. Algorithms 2022, 15, 78. [Google Scholar] [CrossRef]
  33. Dempster, A.P.; Laird, N.M.; Rubin, D.B. Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. Ser. B 1977, 39, 1–22. [Google Scholar]
  34. Everitt, B. Finite Mixture Distributions; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  35. Redner, R.A.; Walker, H.F. Mixture densities, maximum likelihood and the EM algorithm. SIAM Rev. 1984, 26, 195–239. [Google Scholar] [CrossRef]
  36. Xie, C.-H.; Chang, J.-Y.; Liu, Y.-J. Estimating the number of components in Gaussian mixture models adaptively for medical image. Optik 2013, 124, 6216–6221. [Google Scholar] [CrossRef]
  37. Ahmadinejad, N.; Liu, L. J-Score: A Robust Measure of Clustering Accuracy. arXiv 2021, arXiv:2109.01306. [Google Scholar]
  38. Zhong, S.; Ghosh, J. Generative model-based document clustering: A comparative study. Knowl. Inf. Syst. 2005, 8, 374–384. [Google Scholar] [CrossRef]
  39. Lawrence, H.; Phipps, A. Comparing partitions. J. Classif. 1985, 2, 193–218. [Google Scholar]
  40. Wang, P.; Shi, H.; Yang, X.; Mi, J. Three-way k-means: Integrating k-means and three-way decision. Int. J. Mach. Learn. Cybern. 2019, 10, 2767–2777. [Google Scholar] [CrossRef]
  41. Fowlkes, E.B.; Mallows, C.L. A method for comparing two hierarchical clusterings. J. Am. Stat. Assoc. 1983, 78, 553–569. [Google Scholar] [CrossRef]
  42. Caliński, T.; Harabasz, J. A dendrite method for cluster analysis. Commun. Stat. -Theory Methods 1974, 3, 1–27. [Google Scholar] [CrossRef]
  43. Davies, D.L.; Bouldin, D.W. A cluster separation measure. IEEE Trans. Pattern Anal. Mach. Intell. 1979, 224–227. [Google Scholar] [CrossRef]
  44. Sun, Y.; Wang, Y.; Wang, J.; Du, W.; Zhou, C. A novel SVC method based on K-means. In Proceedings of the 2008 Second International Conference on Future Generation Communication and Networking, Hainan, China, 13–15 December 2008; pp. 55–58. [Google Scholar]
  45. Hyde, R.; Angelov, P. Data density based clustering. In Proceedings of the 2014 14th UK Workshop on Computational Intelligence (UKCI), Bradford, UK, 8–10 September 2014; pp. 1–7. [Google Scholar]
Table 1. A description of the data sets used.
ID | Data Set | Sample Size (N) | Dimensions (D) | Classes
Synthetic
1 | Aggregation | 788 | 2 | 7
2 | Atom | 800 | 3 | 2
3 | D31 | 3100 | 2 | 31
4 | R15 | 600 | 2 | 15
5 | Gaussians1 | 100 | 2 | 2
6 | Threenorm | 1000 | 2 | 2
7 | Twenty | 1000 | 2 | 20
8 | Wingnut | 1016 | 2 | 2
Real
9 | Breast | 570 | 30 | 2
10 | CPU | 209 | 6 | 4
11 | Dermatology | 366 | 17 | 6
12 | Diabetes | 442 | 10 | 4
13 | Ecoli | 336 | 7 | 8
14 | Glass | 214 | 9 | 6
15 | Heart-statlog | 270 | 13 | 2
16 | Iono | 351 | 34 | 2
17 | Iris | 150 | 4 | 3
18 | Wine | 178 | 13 | 3
19 | Thyroid | 215 | 5 | 3
Generated clusters with outliers
20 | 2 clusters (0.5% outliers) | 1005 | 2 | 2
21 | 2 clusters (1% outliers) | 1010 | 2 | 2
22 | 2 clusters (2% outliers) | 1020 | 2 | 2
23 | 2 clusters (4% outliers) | 1040 | 2 | 2
25 | 3 clusters (0.5% outliers) | 1005 | 2 | 3
26 | 3 clusters (1% outliers) | 1010 | 2 | 3
27 | 3 clusters (2% outliers) | 1020 | 2 | 3
28 | 3 clusters (4% outliers) | 1040 | 2 | 3
Table 2. Different models were compared (means and standard deviation) based on the accuracy (ACC) for 10,000 runs.
Dataset | K-Means mean (std) | GMM mean (std) | BGMM mean (std) | MIDEv1 mean (std) | MIDEv2 mean (std)
Synthetic
Aggregation | 0.857 (0.005) | 0.835 (0.075) | 0.907 (0.042) | 0.889 (0.008) | 0.895 (0.009)
Atom | 0.710 (0.002) | 0.618 (0.028) | 0.637 (0.022) | 0.723 (0.002) | 0.746 (0.004)
D31 | 0.972 (0.015) | 0.928 (0.028) | 0.601 (0.022) | 0.721 (0.017) | 0.723 (0.013)
R15 | 0.997 (0.000) | 0.979 (0.036) | 0.669 (0.011) | 0.768 (0.008) | 0.855 (0.007)
Gaussians1 | 1.000 (0.000) | 1.000 (0.000) | 1.000 (0.000) | 1.000 (0.000) | 1.000 (0.000)
Threenorm | 0.591 (0.001) | 0.612 (0.047) | 0.549 (0.006) | 0.649 (0.003) | 0.679 (0.003)
Twenty | 1.000 (0.000) | 0.985 (0.029) | 0.838 (0.075) | - | -
Wingnut | 0.909 (0.000) | 0.964 (0.000) | 0.965 (0.000) | 0.876 (0.000) | 0.880 (0.000)
Real
Breast | 0.908 (0.003) | 0.940 (0.001) | 0.933 (0.001) | - | -
CPU | 0.738 (0.008) | 0.574 (0.073) | 0.590 (0.093) | 0.808 (0.007) | 0.828 (0.006)
Dermatology | 0.739 (0.044) | 0.737 (0.080) | 0.756 (0.109) | - | -
Diabetes | 0.356 (0.010) | 0.419 (0.043) | 0.439 (0.033) | 0.420 (0.008) | 0.448 (0.007)
Ecoli | 0.649 (0.013) | 0.753 (0.018) | 0.739 (0.006) | 0.714 (0.011) | 0.754 (0.009)
Glass | 0.447 (0.016) | 0.468 (0.025) | 0.483 (0.025) | 0.465 (0.013) | 0.487 (0.017)
Heart-statlog | 0.837 (0.002) | 0.794 (0.045) | 0.791 (0.045) | - | -
Iono | 0.707 (0.000) | 0.810 (0.029) | 0.803 (0.023) | - | -
Iris | 0.831 (0.007) | 0.953 (0.065) | 0.838 (0.049) | 0.933 (0.006) | 0.955 (0.005)
Wine | 0.966 (0.000) | 0.953 (0.048) | 0.977 (0.038) | 0.943 (0.003) | 0.953 (0.004)
Thyroid | 0.874 (0.000) | 0.953 (0.029) | 0.917 (0.035) | 0.754 (0.007) | 0.778 (0.009)
Generated blobs with outliers
2 clusters (0.5% outliers) | 0.995 (0.000) | 0.995 (0.000) | 0.995 (0.000) | 0.995 (0.000) | 1.000 (0.000)
2 clusters (1% outliers) | 0.989 (0.000) | 0.990 (0.000) | 0.990 (0.000) | 0.990 (0.000) | 0.996 (0.000)
2 clusters (2% outliers) | 0.979 (0.000) | 0.980 (0.000) | 0.980 (0.000) | 0.981 (0.000) | 0.997 (0.000)
2 clusters (4% outliers) | 0.961 (0.000) | 0.962 (0.000) | 0.962 (0.000) | 0.964 (0.000) | 0.996 (0.000)
3 clusters (0.5% outliers) | 0.994 (0.000) | 0.994 (0.000) | 0.994 (0.000) | 0.994 (0.000) | 0.999 (0.000)
3 clusters (1% outliers) | 0.989 (0.000) | 0.989 (0.000) | 0.989 (0.000) | 0.989 (0.000) | 0.997 (0.000)
3 clusters (2% outliers) | 0.979 (0.000) | 0.979 (0.000) | 0.979 (0.000) | 0.981 (0.000) | 0.997 (0.000)
3 clusters (4% outliers) | 0.961 (0.000) | 0.951 (0.000) | 0.945 (0.000) | 0.965 (0.000) | 0.996 (0.000)
Bold underlined values indicate best results for each dataset.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
