# Simple K-Medoids Partitioning Algorithm for Mixed Variable Data


## Abstract


## 1. Introduction

## 2. Related Works and the Proposed Method

#### 2.1. K-Medoids Algorithms

#### 2.2. Proposed K-Medoids Algorithm

- Select a set of initial medoids ${\mathcal{M}}_{k}$ by drawing $k$ objects from ${X}_{n}$ at random. Random initialization is preferred because, repeated several times, it is the best-performing approach [21]. However, a restriction is applied to a single member of ${\mathcal{M}}_{k}$: one randomly selected initial medoid is replaced by the most centrally-located object. The argument is that for $k=1$ the optimal medoid is the most centrally-located object, i.e., the object that minimizes the sum of distances to all other objects. The most centrally-located object can therefore be selected from the row/column sums of the distance matrix:$$a_{i}=\sum_{j=1}^{n}d_{ij}=\sum_{j=1}^{n}d_{ji},\qquad i=1,2,3,\dots,n.$$
- Assign to each object ${x}_{n}\in {X}_{n}$ the label/membership $l\left({x}_{n}\right)$ of its closest medoid in ${\mathcal{M}}_{k}$. When ${\mathcal{M}}_{k}$ contains non-unique objects, an empty cluster can occur because the non-unique medoids and their closest objects would share the same membership label. To avoid an empty cluster, the non-unique medoids are restricted to preserve their own labels. While in the k-means case the objects closest to non-unique centroids may be labeled with any cluster membership [18], here the objects closest to the non-unique medoids are assigned to only one of those medoids, i.e., a single cluster membership. This guarantees a faster medoid-updating step, provided that the non-unique medoid set is not the most centrally-located object.
- Update the set of medoids ${\mathcal{M}}_{k}$, keeping the cluster labels $l\left({x}_{n}\right)$ fixed:$$m_{p}:=\underset{x_{i}:\,l(x_{i})=p}{\mathrm{arg\,min}}\;\sum_{n:\,l\left(x_{n}\right)=p}d(x_{n},x_{i}),\qquad p=1,2,3,\dots,k.$$A medoid $m_{p}$ is the object that minimizes the sum of distances from itself to the other objects within its cluster.
- Calculate the total within-cluster sum of distances $E$, i.e., the sum of distances between the objects and their medoids:$$E=\sum_{p=1}^{k}\;\sum_{n:\,l\left(x_{n}\right)=p}d(x_{n},m_{p}).$$
- Repeat Steps 2–4 until $E$ equals the previous $E$, the medoid set ${\mathcal{M}}_{k}$ no longer changes, or a pre-determined number of iterations is reached.
- Repeat Steps 1–5 s times to obtain multiple random seedings of the initial medoids. Note that the most centrally-located object is always one of the initial medoids. Among the medoid sets produced by the s seedings, the set with the minimum value of $E$ is selected as the final cluster medoids.
- Assign the membership of each object to the final medoids.
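The seven steps above can be sketched as a short Python function operating on a precomputed distance matrix. This is a minimal illustration, not the authors' implementation (which is the `kmed` R package [20]); the function and parameter names, and the simplistic handling of an emptied cluster (the old medoid is kept), are our own assumptions.

```python
import numpy as np

def simple_k_medoids(D, k, s=5, max_iter=100, seed=0):
    """Sketch of the SKM steps on a precomputed n-by-n symmetric
    distance matrix D (illustrative only)."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    # Step 1: the most centrally-located object minimizes the row sums a_i.
    central = int(np.argmin(D.sum(axis=1)))
    best_E, best_medoids = np.inf, None
    for _ in range(s):                                # Step 6: s random seedings
        medoids = rng.choice(n, size=k, replace=False)
        if central not in medoids:                    # force the central object in
            medoids[rng.integers(k)] = central
        prev_E = np.inf
        for _ in range(max_iter):                     # Step 5: iterate to convergence
            labels = np.argmin(D[:, medoids], axis=1)          # Step 2: assign
            for p in range(k):                                 # Step 3: update medoids
                members = np.flatnonzero(labels == p)
                if members.size:                      # keep old medoid if cluster empties
                    within = D[np.ix_(members, members)].sum(axis=1)
                    medoids[p] = members[np.argmin(within)]
            E = D[np.arange(n), medoids[labels]].sum()         # Step 4: total cost
            if E == prev_E:
                break
            prev_E = E
        if E < best_E:                                # keep the best seeding by E
            best_E, best_medoids = E, medoids.copy()
    # Step 7: final membership under the best medoid set.
    return best_medoids, np.argmin(D[:, best_medoids], axis=1), best_E
```

Because the medoid update in Step 3 never increases the within-cluster sums, $E$ decreases monotonically and the inner loop terminates at a fixed point.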

#### 2.3. Proposed Distance Method

## 3. Demonstration on Artificial and Real Datasets

#### 3.1. Data Simulation

#### 3.1.1. Different Variable Proportions

#### 3.1.2. Different Number of Clusters

#### 3.1.3. Different Numbers of Variables

#### 3.1.4. Different Numbers of Objects

#### 3.2. Real Datasets

#### 3.2.1. Iris Data

#### 3.2.2. Wine Data

#### 3.2.3. Soybean Data

#### 3.2.4. Vote Data

#### 3.2.5. Zoo Data

#### 3.2.6. Credit Approval Data

## 4. Conclusions

## Supplementary Materials

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest

## Abbreviations

| Abbreviation | Definition |
|---|---|
| GDF | Generalized distance function |
| KM | K-medoids |
| PAM | Partitioning around medoids |
| SFKM | Simple and fast k-medoids |
| SKM | Simple k-medoids |

## Appendix A

## References

- Gan, G.; Ma, C.; Wu, J. Data Clustering: Theory, Algorithms, and Applications; The American Statistical Association and the Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2007.
- Kaufman, L.; Rousseeuw, P.J. Finding Groups in Data; John Wiley and Sons Inc.: New York, NY, USA, 1990.
- Hartigan, J.A.; Wong, M.A. A K-Means Clustering Algorithm. J. R. Stat. Soc. Ser. C **1979**, 28, 100–108.
- Wu, X.; Kumar, V.; Quinlan, J.R.; Ghosh, J.; Yang, Q.; Motoda, H.; McLachlan, G.J.; Ng, A.; Liu, B.; Yu, P.S.; et al. Top 10 algorithms in data mining. Knowl. Inf. Syst. **2008**, 14, 1–37.
- Huang, Z. Clustering large datasets with mixed numeric and categorical values. In Proceedings of the First Pacific-Asia Conference on Knowledge Discovery and Data Mining; Springer: Berlin, Germany, 1997; pp. 21–34.
- Ahmad, A.; Dey, L. A K-mean clustering algorithm for mixed numeric and categorical data. Data Knowl. Eng. **2007**, 63, 503–527.
- Harikumar, S.; Surya, P.V. K-Medoid Clustering for Heterogeneous DataSets. Procedia Comput. Sci. **2015**, 70, 226–237.
- McCane, B.; Albert, M. Distance functions for categorical and mixed variables. Pattern Recognit. Lett. **2008**, 29, 986–993.
- Gower, J.C. A General Coefficient of Similarity and Some of Its Properties. Biometrics **1971**, 27, 857–871.
- Friedman, J.H.; Meulman, J.J. Clustering objects on subsets of attributes (with discussion). J. R. Stat. Soc. Ser. B **2004**, 66, 815–849.
- Yin, J.; Tan, Z. Clustering Mixed Type Attributes in Large Dataset. In ISPA 2005: Parallel and Distributed Processing and Applications; Pan, Y., Chen, D., Guo, M., Cao, J., Dongarra, J., Eds.; Lecture Notes in Computer Science, Volume 3758; Springer: Berlin/Heidelberg, Germany, 2005; pp. 655–661.
- Bushel, P.R.; Wolfinger, R.D.; Gibson, G. Simultaneous clustering of gene expression data with clinical chemistry and pathological evaluations reveals phenotypic prototypes. BMC Syst. Biol. **2007**, 1, 1–20.
- Ji, J.; Bai, T.; Zhou, C.; Ma, C.; Wang, Z. An improved k-prototypes clustering algorithm for mixed numeric and categorical data. Neurocomputing **2013**, 120, 590–596.
- Liu, S.H.; Shen, L.Z.; Huang, D.C. A three-stage framework for clustering mixed data. WSEAS Trans. Syst. **2016**, 15, 1–10.
- Reynolds, A.P.; Richards, G.; De La Iglesia, B.; Rayward-Smith, V.J. Clustering Rules: A Comparison of Partitioning and Hierarchical Clustering Algorithms. J. Math. Model. Algorithms **2006**, 5, 475–504.
- Park, H.; Jun, C. A simple and fast algorithm for K-medoids clustering. Expert Syst. Appl. **2009**, 36, 3336–3341.
- Steinley, D. Local Optima in K-Means Clustering: What You Don't Know May Hurt You. Psychol. Methods **2003**, 8, 294–304.
- Pakhira, M.K. A Modified k-means Algorithm to Avoid Empty Clusters. Int. J. Recent Trends Eng. **2009**, 1, 221–226.
- Zadegan, S.M.R.; Mirzaie, M.; Sadoughi, F. Ranked k-medoids: A fast and accurate rank-based partitioning algorithm for clustering large datasets. Knowl.-Based Syst. **2016**, 39, 133–143.
- Budiaji, W. kmed: Distance-Based K-Medoids. R Package Version 0.3.0, 2019. Available online: http://CRAN.R-project.org/package=kmed (accessed on 15 June 2019).
- Steinley, D.; Brusco, M. Initializing K-means Batch Clustering: A Critical Evaluation of Several Techniques. J. Classif. **2007**, 24, 99–121.
- Podani, J. Introduction to the Exploration of Multivariate Biological Data; Backhuys Publishers: Leiden, The Netherlands, 2000.
- Wishart, D. K-Means Clustering with Outlier Detection, Mixed Variables and Missing Values. In Exploratory Data Analysis in Empirical Research, Proceedings of the 25th Annual Conference of the Gesellschaft für Klassifikation e.V., University of Munich, 14–16 March 2001; Springer: Berlin/Heidelberg, Germany, 2003; pp. 216–226.
- Qiu, W.; Joe, H. Generation of Random Clusters with Specified Degree of Separation. J. Classif. **2006**, 23, 315–334.
- Qiu, W.; Joe, H. Separation Index and Partial Membership for Clustering. Comput. Stat. Data Anal. **2006**, 50, 585–603.
- Hennig, C. Cluster-wise assessment of cluster stability. Comput. Stat. Data Anal. **2007**, 52, 258–271.
- Lichman, M. UCI Machine Learning Repository; University of California: Irvine, CA, USA, 2013. Available online: http://archive.ics.uci.edu/ml (accessed on 17 July 2018).
- R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2015. Available online: https://www.R-project.org/ (accessed on 2 January 2017).
- Maechler, M.; Rousseeuw, P.; Struyf, A.; Hubert, M.; Hornik, K. cluster: Cluster Analysis Basics and Extensions. R Package Version 2.0.6, 2017. Available online: http://CRAN.R-project.org/package=cluster (accessed on 10 September 2018).
- Qiu, W.; Joe, H. clusterGeneration: Random Cluster Generation (with Specified Degree of Separation). R Package Version 1.3.4, 2015. Available online: http://CRAN.R-project.org/package=clusterGeneration (accessed on 23 August 2018).
- Leisch, F.; Dimitriadou, E.; Gruen, B. flexclust: Flexible Cluster Algorithms. R Package Version 1.4-0, 2018. Available online: http://CRAN.R-project.org/package=flexclust (accessed on 15 January 2019).
- Hornik, K.; Böhm, W. clue: Cluster Ensembles. R Package Version 0.3-57, 2019. Available online: http://CRAN.R-project.org/package=clue (accessed on 7 May 2019).
- Wickham, H. ggplot2: Elegant Graphics for Data Analysis; Springer: New York, NY, USA, 2016.
- Auguie, B.; Antonov, A. gridExtra: Miscellaneous Functions for "Grid" Graphics. R Package Version 2.3, 2017. Available online: http://CRAN.R-project.org/package=gridExtra (accessed on 1 May 2018).
- Leisch, F. A toolbox for K-centroids cluster analysis. Comput. Stat. Data Anal. **2006**, 51, 526–544.
- Ahmad, A.; Dey, L. K-means type clustering algorithm for subspace clustering of mixed numeric and categorical datasets. Pattern Recognit. Lett. **2011**, 32, 1062–1069.

**Figure 1.** The simple and fast k-medoids (SFKM) (**a**) and simple k-medoids (SKM) (**b**) partitioning results.

**Figure 3.** Benchmarking of PAM and SKM for $k=3$ (**a**) and $k=10$ (**b**) with `set.seed(2018)`.

**Figure 4.** Benchmarking of PAM and SKM for $k=3$ (**a**) and $k=10$ (**b**) with `set.seed(2019)`.

| GDF | $\omega$ | $\alpha$ | $\beta$ | $\gamma$ | $\delta_{n}(x_{ir},x_{jr})$ | $\delta_{b}(x_{it},x_{jt})$ | $\delta_{c}(x_{is},x_{js})$ |
|---|---|---|---|---|---|---|---|
| Gower | 1 | $\frac{1}{p_{n}+p_{b}+p_{c}}$ | $\frac{p_{b}}{p_{n}+p_{b}+p_{c}}$ | $\frac{p_{c}}{p_{n}+p_{b}+p_{c}}$ | M rw | SM | SM |
| Wishart | $\frac{1}{2}$ | $\frac{1}{p_{n}+p_{b}+p_{c}}$ | $\frac{p_{b}}{p_{n}+p_{b}+p_{c}}$ | $\frac{p_{c}}{p_{n}+p_{b}+p_{c}}$ | SE vw | SM | SM |
| Podani | $\frac{1}{2}$ | 1 | $p_{b}$ | $p_{c}$ | SE $r^{2}$w | SM | SM |
| Huang | 1 | 1 | $\overline{s_{n}}$ | $\overline{s_{n}}$ | SE | H | H |
| Harikumar-PV | 1 | 1 | 1 | 1 | M | H | CoC |
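As a concrete reference point, the Gower row above (range-weighted Manhattan, "M rw", for numeric variables and simple matching, "SM", for binary/categorical ones, averaged over all variables) corresponds to Gower's classical dissimilarity [9]. A minimal Python sketch follows; the function and argument names are ours, purely for illustration.

```python
def gower_distance(x, y, is_numeric, ranges):
    """Classical Gower dissimilarity between two mixed-variable records:
    range-weighted Manhattan for numeric variables, simple matching for
    binary/categorical ones, averaged over all p = p_n + p_b + p_c
    variables (an illustrative sketch)."""
    p = len(x)
    total = 0.0
    for r in range(p):
        if is_numeric[r]:
            # |x_r - y_r| / range_r: the range-weighted Manhattan term
            total += abs(x[r] - y[r]) / ranges[r]
        else:
            # simple matching: 0 if the categories agree, 1 otherwise
            total += 0.0 if x[r] == y[r] else 1.0
    return total / p
```

For two records differing by half the range on one numeric variable and mismatching on one categorical variable, the distance is the average of 0.5 and 1.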

**Table 2.** Two examples of the mixed variable distances derived from the generalized distance function (GDF).

| GDF | $\omega$ | $\alpha$ | $\beta$ | $\gamma$ | $\delta_{n}(x_{ir},x_{jr})$ | $\delta_{b}(x_{it},x_{jt})$ | $\delta_{c}(x_{is},x_{js})$ |
|---|---|---|---|---|---|---|---|
| Esimma | 1 | 1 | 1 | 1 | E | SM | SM |
| Marweco | 1 | 1 | 1 | 1 | M rw | CoC | CoC |

| Data Set | n | $p_{n}$ | $p_{b}$ | $p_{c}$ | k |
|---|---|---|---|---|---|
| Iris | 150 | 4 | 0 | 0 | 3 |
| Wine | 178 | 13 | 0 | 0 | 3 |
| Soybean | 47 | 0 | 0 | 35 | 4 |
| Vote | 435 | 0 | 0 | 16 | 2 |
| Zoo | 101 | 1 | 15 | 0 | 7 |
| Credit approval | 653 | 6 | 0 | 9 | 2 |

| Domination | Algorithm | Gower | Wishart | Podani | Huang | Harikumar | Esimma | Marweco |
|---|---|---|---|---|---|---|---|---|
| Numerical | PAM | 0.74 | 0.82 | 0.73 | 0.84 | 0.83 | 0.85 | 0.82 |
| | SFKM | 0.72 $^{a}$ | 0.73 $^{a}$ | 0.71 $^{a}$ | 0.74 $^{a}$ | 0.71 $^{a}$ | 0.76 $^{a}$ | 0.71 $^{a}$ |
| | SKM | 0.74 | 0.80 | 0.73 | 0.84 | 0.83 | 0.85 | 0.82 |
| Binary | PAM | 0.78 | 0.78 | 0.77 | 0.77 | 0.76 | 0.73 | 0.74 |
| | SFKM | 0.74 $^{a}$ | 0.73 $^{a}$ | 0.73 $^{a}$ | 0.74 $^{a}$ | 0.72 $^{a}$ | 0.73 $^{c}$ | 0.73 $^{c}$ |
| | SKM | 0.78 | 0.77 | 0.78 | 0.77 | 0.75 | 0.74 | 0.75 |
| Categorical, $c=3$ | PAM | 0.78 | 0.76 | 0.78 | 0.69 | 0.68 | 0.67 | 0.74 |
| | SFKM | 0.71 $^{a}$ | 0.72 $^{a}$ | 0.71 $^{a}$ | 0.70 $^{a}$ | 0.68 | 0.67 | 0.70 $^{a}$ |
| | SKM | 0.77 | 0.75 | 0.78 | 0.69 | 0.68 | 0.67 | 0.74 |
| Categorical, $c=5$ | PAM | 0.77 | 0.82 | 0.77 | 0.79 | 0.78 | 0.76 | 0.77 |
| | SFKM | 0.71 $^{a}$ | 0.75 $^{a}$ | 0.71 $^{a}$ | 0.79 | 0.79 | 0.76 | 0.69 $^{a}$ |
| | SKM | 0.76 | 0.81 | 0.77 | 0.79 | 0.78 | 0.76 | 0.77 |

| k | Algorithm | Gower | Wishart | Podani | Huang | Harikumar | Esimma | Marweco |
|---|---|---|---|---|---|---|---|---|
| 3 | PAM | 0.83 | 0.87 | 0.81 | 0.81 | 0.83 | 0.84 | 0.88 |
| | SFKM | 0.79 $^{a}$ | 0.83 $^{a}$ | 0.76 $^{a}$ | 0.79 $^{a}$ | 0.81 | 0.81 $^{a}$ | 0.84 $^{a}$ |
| | SKM | 0.83 | 0.87 | 0.81 | 0.79 | 0.81 | 0.84 | 0.88 |
| 4 | PAM | 0.76 | 0.88 | 0.74 | 0.89 | 0.90 | 0.90 | 0.84 |
| | SFKM | 0.72 $^{a}$ | 0.84 $^{a}$ | 0.73 | 0.86 $^{a}$ | 0.86 $^{a}$ | 0.88 $^{a}$ | 0.80 $^{a}$ |
| | SKM | 0.77 | 0.88 | 0.74 | 0.89 | 0.90 | 0.90 | 0.84 |
| 8 | PAM | 0.82 | 0.91 | 0.81 | 0.91 | 0.90 | 0.91 | 0.88 |
| | SFKM | 0.80 $^{a}$ | 0.87 $^{a}$ | 0.80 $^{a}$ | 0.89 $^{a}$ | 0.87 $^{a}$ | 0.89 $^{a}$ | 0.84 $^{a}$ |
| | SKM | 0.81 | 0.90 | 0.81 | 0.91 | 0.90 | 0.90 | 0.87 |
| 10 | PAM | 0.86 | 0.92 | 0.85 | 0.91 | 0.91 | 0.91 | 0.89 |
| | SFKM | 0.84 $^{a}$ | 0.88 $^{a}$ | 0.85 $^{a}$ | 0.90 $^{a}$ | 0.88 $^{a}$ | 0.90 $^{a}$ | 0.85 $^{a}$ |
| | SKM | 0.85 | 0.90 | 0.85 | 0.91 | 0.90 | 0.91 | 0.87 |

| p | Algorithm | Gower | Wishart | Podani | Huang | Harikumar | Esimma | Marweco |
|---|---|---|---|---|---|---|---|---|
| 6 | PAM | 0.85 | 0.77 | 0.83 | 0.69 | 0.69 | 0.69 | 0.79 |
| | SFKM | 0.79 $^{a}$ | 0.75 $^{b}$ | 0.81 $^{a}$ | 0.70 $^{b}$ | 0.70 $^{c}$ | 0.70 | 0.77 $^{c}$ |
| | SKM | 0.84 | 0.76 | 0.84 | 0.69 | 0.69 | 0.69 | 0.79 |
| 8 | PAM | 0.80 | 0.93 | 0.79 | 0.90 | 0.90 | 0.92 | 0.86 |
| | SFKM | 0.74 $^{a}$ | 0.86 $^{a}$ | 0.75 $^{a}$ | 0.88 $^{a}$ | 0.84 $^{a}$ | 0.88 $^{a}$ | 0.79 $^{a}$ |
| | SKM | 0.80 | 0.93 | 0.79 | 0.91 | 0.90 | 0.92 | 0.86 |
| 10 | PAM | 0.74 | 0.77 | 0.73 | 0.76 | 0.76 | 0.76 | 0.76 |
| | SFKM | 0.71 $^{a}$ | 0.73 $^{a}$ | 0.71 $^{a}$ | 0.74 $^{a}$ | 0.73 $^{a}$ | 0.74 $^{a}$ | 0.70 $^{a}$ |
| | SKM | 0.74 | 0.76 | 0.73 | 0.75 | 0.76 | 0.76 | 0.75 |
| 14 | PAM | 0.69 | 0.85 | 0.68 | 0.95 | 0.95 | 0.95 | 0.89 |
| | SFKM | 0.66 $^{a}$ | 0.78 $^{a}$ | 0.66 $^{a}$ | 0.85 $^{a}$ | 0.81 $^{a}$ | 0.86 $^{a}$ | 0.79 $^{a}$ |
| | SKM | 0.69 | 0.85 | 0.67 | 0.95 | 0.95 | 0.95 | 0.89 |

| n | Algorithm | Gower | Wishart | Podani | Huang | Harikumar | Esimma | Marweco |
|---|---|---|---|---|---|---|---|---|
| 100 | PAM | 0.75 | 0.87 | 0.73 | 0.94 | 0.93 | 0.95 | 0.91 |
| | SFKM | 0.72 $^{b}$ | 0.84 $^{a}$ | 0.73 | 0.89 $^{a}$ | 0.88 $^{a}$ | 0.89 $^{a}$ | 0.85 $^{a}$ |
| | SKM | 0.74 | 0.88 | 0.73 | 0.94 | 0.93 | 0.94 | 0.91 |
| 500 | PAM | 0.76 | 0.93 | 0.75 | 0.91 | 0.90 | 0.92 | 0.92 |
| | SFKM | 0.74 $^{a}$ | 0.88 $^{a}$ | 0.74 | 0.88 $^{a}$ | 0.87 $^{a}$ | 0.90 $^{a}$ | 0.89 $^{a}$ |
| | SKM | 0.76 | 0.93 | 0.75 | 0.91 | 0.89 | 0.92 | 0.92 |
| 1000 | PAM | 0.75 | 0.88 | 0.75 | 0.86 | 0.89 | 0.89 | 0.85 |
| | SFKM | 0.73 $^{a}$ | 0.81 $^{a}$ | 0.74 | 0.86 | 0.86 $^{a}$ | 0.87 $^{a}$ | 0.84 |
| | SKM | 0.76 | 0.88 | 0.74 | 0.86 | 0.89 | 0.89 | 0.85 |
| 2000 | PAM | 0.77 | 0.96 | 0.77 | 0.98 | 0.98 | 0.98 | 0.95 |
| | SFKM | 0.76 $^{a}$ | 0.91 $^{a}$ | 0.76 $^{a}$ | 0.97 | 0.95 $^{a}$ | 0.97 | 0.92 $^{a}$ |
| | SKM | 0.78 | 0.96 | 0.77 | 0.98 | 0.98 | 0.98 | 0.95 |

| Dataset | Type | SKM | Other Algorithms |
|---|---|---|---|
| Iris | Numerical | 95.3 | 94.7 [6] $^{a}$, 82.2 [13] $^{b}$ |
| Wine | Numerical | 92.7 | 70.3 [19] $^{c}$ |
| Soybean | Categorical | 100 | 91.0 [13] $^{b}$, 81.0 [16] $^{d}$ |
| Vote | Categorical | 87.8 | 86.7 [6] $^{a}$ |
| Zoo | Mixed | 82.2 | 78.7 [19] $^{c}$ |
| Credit approval | Mixed | 82.7 | 77.9 [13] $^{b}$ |

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Budiaji, W.; Leisch, F.
Simple K-Medoids Partitioning Algorithm for Mixed Variable Data. *Algorithms* **2019**, *12*, 177.
https://doi.org/10.3390/a12090177
