Article

Feature Selection for Colon Cancer Detection Using K-Means Clustering and Modified Harmony Search Algorithm

College of IT Convergence, Gachon University, Seongnam 13120, Korea
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(5), 570; https://doi.org/10.3390/math9050570
Submission received: 3 February 2021 / Revised: 2 March 2021 / Accepted: 3 March 2021 / Published: 7 March 2021

Abstract

This paper proposes a feature selection method that is effective in distinguishing colorectal cancer patients from normal individuals using K-means clustering and the modified harmony search algorithm. As the genetic cause of colorectal cancer originates from mutations in genes, it is important to classify the presence or absence of colorectal cancer through gene information. The proposed methodology consists of four steps. First, the original data are Z-normalized by data preprocessing. Candidate genes are then selected using the Fisher score. Next, one representative gene is selected from each cluster after the candidate genes are clustered using K-means clustering. Finally, feature selection is carried out using the modified harmony search algorithm. The gene combination created by feature selection is then applied to the classification model and verified using 5-fold cross-validation. The proposed model obtained a classification accuracy of up to 94.36%. Furthermore, a comparison with other methods shows that the proposed method performs well in classifying colorectal cancer. We believe that the proposed model can be applied not only to colorectal cancer but also to other gene-related diseases.

1. Introduction

Colorectal cancer (CRC) is the third most common cause of cancer mortality and accounts for 11% of all cancer diagnoses worldwide [1,2]. CRC is the third most common cancer among men and the second most common among women [3]. Furthermore, as the incidence rate among young people is gradually increasing, the average age of people diagnosed with CRC is decreasing. The average age at CRC diagnosis in the United States was 72 years between 2001 and 2002 and decreased to 66 years between 2015 and 2016 [4]. Therefore, early diagnosis of CRC is becoming increasingly important.
The major causes of CRC are smoking, obesity, and poor lifestyle and eating habits, all of which are acquired factors. It has been statistically shown that the risk for CRC is higher in developed countries [5]. Excessive consumption of animal fat and meat, especially red meat, acts as a risk factor for CRC. Nevertheless, cancer incidence due to various acquired or environmental factors can be substantially reduced by changing lifestyle patterns.
Meanwhile, genetic factors account for 10–30% of all CRC cases. However, for individuals carrying these genetic factors, the incidence of CRC is significantly higher than that due to acquired factors. Representative examples include familial adenomatous polyposis (FAP) and hereditary non-polyposis colorectal cancer (HNPCC). FAP causes hundreds or thousands of adenomas to develop on the wall of the colon, and almost 100% of affected individuals develop cancer in adulthood. Considering that 95% of patients develop cancer before 45 years of age, prevention through early diagnosis is necessary. HNPCC develops at an early age and is more common than FAP, and the risk of CRC in immediate family members increases by two to three times [6]. Therefore, it is important to identify, through testing, the genes involved in the development of CRC.
Cancer is caused by genetic mutations in normal cells. Genes, the functional units of human DNA, encode proteins, and these proteins determine cell functions [7]. Gene expression refers to the process of producing a protein, the final product of DNA. Genetic information is transcribed into mRNA and translated into the amino acid sequence of a protein. The translated product catalyzes biological reactions or forms specific structures and is expressed in cells and individuals. When a gene becomes abnormal during this process, the wrong protein is produced; such errors are mutations. Hereditary CRC arises when such a mutation occurs. Mutant genes that cause disease can be identified through special genetic tests. These state-of-the-art tests enable early diagnosis, treatment, and active prevention, but they are expensive and suffer from the disadvantage that the patient has to wait approximately a month for the test results. In addition, it is not easy to identify the mutant gene using these tests, as the probability of carrying a gene that causes CRC is only 3–5% relative to the total number of genes that make up the human body [8]. Given the high cost of genetic testing and the sheer number of genes, selecting a small, informative subset of genes is difficult. Our proposed method can overcome these difficulties and help diagnose genetic CRC.
We propose the following feature selection method. First, candidate genes are selected according to how well their distributions separate the normal and abnormal classes, using the Fisher score [9]. Based on the selected subset, K-means clustering is performed and a representative gene is found for each cluster [10]. Subsequently, the representative genes are searched for an optimal combination using the harmony search (HS) algorithm, which leads to high classification accuracy while using only a few genes [11].

2. Related Work

DNA information is an important factor in predicting genetic diseases. However, diagnosis can be difficult when the amount of data is large or unexpected genetic mutations are involved. In recent years, with the progress made in the field of artificial intelligence, research on predicting diseases using only biological data has been actively conducted. Several studies have predicted CRC using the CRC gene information published by the Princeton University Gene Expression Project.
In one such study, the data were analyzed with random subspace ensembles, and a support vector machine (SVM) was used as the classifier to predict CRC from cancer gene information [12]. The authors created the random ensemble application using a new C++ class and the NEURObjects library [13].
There is also a study on feature selection using K-means clustering [14], wherein classification performance was compared against known methods such as mRMR, Clustering+mRMR, SVM-RFE, Clustering+SVM-RFE, HSIC-LASSO, and Clustering+HSIC-LASSO.

3. Materials and Methods

Of the 6500 human genes provided in [15], expression levels from 40 tumor and 22 normal colon tissue samples were used. In this study, we used the information of the 2000 genes with the highest minimum intensity across all samples. We attempted to classify CRC using this information on 2000 CRC genes from 62 people provided by the Princeton University Gene Expression Project. All data used in the experiment are either 3′ UTR or gene sequences; the 3′ UTR strictly controls gene expression in normal cells [16].
Figure 1 represents the step-by-step process proposed in this paper. The parameters and their corresponding values used in the experiment are described in the process of each step.
First, the data are normalized using Z-normalization [17]. Candidate genes are then selected from the normalized data using the Fisher score. Next, the candidate genes are clustered using K-means clustering, after which the representative genes to be used for CRC prediction are selected within each cluster. Finally, the selected representative genes are searched for multiple gene combinations using the HS method. The combination obtained is then verified using 5-fold cross-validation.
• Z-normalization
Normalization ensures that all data values are reflected with the same degree of importance. The formula for Z-normalization is given in (1), where x is the original value, μ is the mean of the data, and σ is the standard deviation of the data. The mean of the genetic information values has a significant influence on the normalized value. If the extracted value equals the mean of the genetic information, it is normalized to zero; if it is less than the mean, it is normalized to a negative number; and if it is greater than the mean, it is normalized to a positive number. The magnitude of the normalized value is determined by the standard deviation of the genetic information values: if the range of the data values is large, that is, if the standard deviation is large, the normalized value is closer to zero.
$z = \frac{x - \mu}{\sigma}$          (1)
We normalized the data by substituting the original genetic information values into (1). Table 1 shows the Z-normalized values of Attribute 1, one of the genetic information values recorded for each patient. The number of patients included in the actual experiment was 62, but Table 1 shows, as an example, the value of Attribute 1 for only 12 patients. The mean of the Attribute 1 gene information over the 62 people was 7015.78671. This mean is subtracted from each patient's genetic information value, and the result is divided by the standard deviation of the gene information, 3092.970584. The resulting normalized numbers are listed in Table 1; the same procedure is applied to all data. A minimal code sketch of this step is shown below.
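The following Python sketch illustrates the Z-normalization step of (1); it assumes the expression matrix is stored as a NumPy array with one row per patient and one column per gene, and the variable and function names are ours, not from the paper.

import numpy as np

def z_normalize(X):
    """Standardize each gene (column) to zero mean and unit variance, as in (1)."""
    mu = X.mean(axis=0)       # per-gene mean over all patients
    sigma = X.std(axis=0)     # per-gene standard deviation
    return (X - mu) / sigma

# Example: the Attribute 1 value of patient 1 in Table 1,
# (8589.4163 - 7015.78671) / 3092.970584, is approximately 0.5088.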
• Fisher Score
The number of combinations to consider when selecting, from 2000 genetic information values, a small number of genes that distinguish colorectal cancer is nearly infinite. The main purpose of this step is to select candidate genes that are easy to classify using the Fisher score. In addition, this step reduces the number of combinations and serves as the basis for the selection of representative genes. It also reduces redundancy among genes with similar characteristics and reduces the time complexity of the experiments. The Fisher score is related to Newton's method and is used for maximum likelihood estimation in statistics [18]. The score is computed as in (2), where $\bar{X}_i^A$ and $\bar{X}_i^B$ denote the average gene information value of gene i for normal individuals and for individuals with the cancer gene, respectively, and $\sigma_i^A$ and $\sigma_i^B$ denote the corresponding standard deviations of gene i.
$S_i = \frac{\left(\bar{X}_i^A - \bar{X}_i^B\right)^2}{\left(\sigma_i^A\right)^2 + \left(\sigma_i^B\right)^2}$          (2)
Here, the data for gene i refer to the gene information values of normal individuals and of patients with CRC. As the original data are already labeled, the discriminative power of gene i can be evaluated directly. The larger the Fisher score of gene i, the larger the separation between the distributions of the two classes. Therefore, we selected the 1000 genes with the highest Fisher scores as candidate genes for the next feature selection step; a small code sketch of this ranking is shown below.
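The sketch below computes the Fisher score of (2) for every gene and keeps the top 1000, assuming X is the normalized (62 patients x 2000 genes) NumPy array and y holds the binary class labels (0 = normal, 1 = tumor); the function names are illustrative.

import numpy as np

def fisher_scores(X, y):
    """Fisher score of (2) for each gene (column) of X."""
    A, B = X[y == 0], X[y == 1]                       # normal and tumor samples
    numerator = (A.mean(axis=0) - B.mean(axis=0)) ** 2
    denominator = A.std(axis=0) ** 2 + B.std(axis=0) ** 2
    return numerator / denominator

def select_candidates(X, y, n_top=1000):
    """Indices of the n_top genes with the highest Fisher scores."""
    scores = fisher_scores(X, y)
    return np.argsort(scores)[::-1][:n_top]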
• K-means Clustering
We used K-means clustering, an unsupervised learning method, to find representative genes among the 1000 candidate genes selected using the Fisher score. In K-means clustering, clusters are formed around the nearest centroid, that is, the mean of a group, so the clustering is carried out using the averages of the data. When N data points $(x_1, x_2, \ldots, x_N)$ are divided into K clusters, the objective can be expressed as
$C = \sum_{n=1}^{N} \sum_{k=1}^{K} r_{nk} \left\lVert x_n - u_k \right\rVert^2$          (3)
For each cluster, the squared distances from its data points to the cluster mean are summed, and the assignments and centers must be chosen so that C is minimized. Here, $u_k$ is the center vector of the kth cluster; the initial $u_k$ is an arbitrary value taken as the cluster center. With $u_k$ fixed, the values of $r_{nk}$ that minimize C are found: $r_{nk}$ is 1 when $x_n$ belongs to the kth cluster and 0 otherwise. Once the $r_{nk}$ values are obtained, they are fixed in turn and $u_k$ is recomputed. This process is repeated for a predetermined number of iterations or until further iterations no longer change the result.
In this study, the number of clusters was set to 20. Each cluster consists of candidate genes grouped for the classification of CRC. Rather than using all 1000 candidate genes for feature selection, 20 representative genes were selected so as to preserve diversity while reducing the search space. In each cluster, the gene whose information values were closest to the cluster median was designated as the representative gene of the cluster. We used the cosine distance to calculate the distance between each gene and the median; the cosine distance w between two vectors u and v can be calculated using (4). We compute the cosine distance with the scipy.spatial.distance library; a short sketch of the whole step follows Equation (4).
$w = 1 - \frac{u \cdot v}{\lVert u \rVert_2 \, \lVert v \rVert_2}$          (4)
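The sketch below clusters the candidate genes with scikit-learn's KMeans and picks, in each cluster, the gene with the smallest cosine distance (4) to the cluster median. It assumes `genes` is a (1000 candidate genes x 62 patients) NumPy array with one row per gene; the function name and the fixed random seed are our assumptions.

import numpy as np
from sklearn.cluster import KMeans
from scipy.spatial.distance import cosine

def representative_genes(genes, n_clusters=20, random_state=0):
    """Return the indices of one representative gene per cluster."""
    km = KMeans(n_clusters=n_clusters, random_state=random_state).fit(genes)
    reps = []
    for k in range(n_clusters):
        members = np.where(km.labels_ == k)[0]
        median = np.median(genes[members], axis=0)            # per-cluster median profile
        dists = [cosine(genes[i], median) for i in members]   # cosine distance of (4)
        reps.append(members[int(np.argmin(dists))])
    return np.array(reps)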
• Modified Harmony Search (MHS)
The HS algorithm is an evolutionary computation algorithm inspired by the process by which musicians improvise a harmony. Harmony search has been applied to research using biological data. Hickmann et al. conducted weekly predictions of seasonal influenza based on Wikipedia access and CDC influenza-like illness (ILI) reports [19], forming 50% and 95% confidence intervals for the 2013–2014 ILI observations. The HSWOA method, which combines HS with the whale optimization algorithm (WOA), was studied for improving the accuracy of hybridization reactions of DNA sequences [20]. A comparative analysis was conducted with NACST/Seq [21], DEPT [22], H-MO-TLBO [23], and MO-ABC [24], and the average fitness of HSWOA was higher than that of the four algorithms. Additionally, the COA-HS algorithm combines the cuckoo optimization algorithm with harmony search for cancer gene selection [25]; it seeks to overcome the curse of dimensionality and aims at selecting meaningful genes. There is also a study proposing a metaheuristic harmony search algorithm that effectively predicts the structure of RNA as well as DNA [26]. Harmony search has also been applied to reducing hand tremors in Parkinson's disease rehabilitation and the intensity of magnetic fields transmitted to the brain [27]. In this study, the existing HS process was modified and used as a feature selection method. The existing HS algorithm involves a total of four steps.
Step 1. Initializing parameters and harmony memory
The first step is to initialize the variables and the harmonies that make up the harmony memory. To use this algorithm, we need to know the meaning of its parameters. As HS is an evolutionary algorithm, it can be compared to a genetic algorithm: the genes, which are the basic elements of a chromosome in the genetic algorithm, correspond to the musical notes that are the basic elements of a harmony vector. The harmony memory size (HMS) is the number of harmonies in one harmony memory. Harmony vectors are randomly initialized at the start of the HS procedure, and previous harmony values are used in later iterations.
Step 2. Creating a new harmony
In this stage, a new harmony is created by combining values from the harmony memory, which yields a wide range of combinations. The harmony memory holds as many harmonies as the HMS. For each variable position, a value is randomly selected from the corresponding position of one of the harmonies in memory, and the values gathered at all positions form the new harmony. The harmony memory considering rate (HMCR) is the probability of selecting a value from memory in this way; with probability 1-HMCR the value is instead randomly initialized, just as when the first harmonies were created. The new harmony is then added to the harmony memory. The pitch adjusting rate (PAR) is the probability of applying a small variation to a value taken from memory, which further diversifies the set of combinations. A minimal sketch of this improvisation step is given below.
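The following Python sketch illustrates the standard HS improvisation step for a binary harmony vector such as the gene masks used later in this paper; the default parameter values and the function name are illustrative assumptions, not taken from the paper.

import random

def improvise(harmony_memory, hmcr=0.9, par=0.1):
    """Create one new binary harmony from the memory using HMCR and PAR."""
    n_vars = len(harmony_memory[0])
    new_harmony = []
    for j in range(n_vars):
        if random.random() < hmcr:
            value = random.choice(harmony_memory)[j]   # memory consideration
            if random.random() < par:
                value = 1 - value                      # pitch adjustment: flip the bit
        else:
            value = random.randint(0, 1)               # random initialization
        new_harmony.append(value)
    return new_harmony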
Step 3. Updating harmony memory
In this step, the newly generated harmony vector is evaluated. The quality of the harmony is measured by its objective function value (fitness). If the new harmony vector generated in Step 2 has a better objective function value than the worst harmony in the harmony memory, the new vector is included in the harmony memory and the worst one is removed.
Step 4. Repeating Steps 2 and 3.
Steps 2 and 3 are repeated for the specified number of iterations. With each iteration, the harmony with the lowest fitness is removed, so new combinations are progressively generated from harmonies of high fitness.
We propose a new feature selection method by modifying the existing HS. The related pseudocode is shown in Algorithm 1.
Algorithm 1. Pseudocode of Modified Harmony Search Algorithm
1. Set the parameters BDR, HMS, HMCR, PAR
2. Set itr := 0                                    // iteration counter
3. Initialize each harmony with 0 and 1 (binary values)
4. BDR := HMS * 0.2                                // boundary between the upper and lower areas
5. For (i = 1 : i ≤ HMS)
6.     generate initial harmony
7. End for
8. Repeat
9.     For (J = 1 : N)                             // harmony search in the upper area
10.        x_new^J := randomly select from x_1^J to x_BDR^J
11.    End for
12.    generate new harmony (x_new)
13.    If (Rand(0,1) < HMCR) then                  // harmony search in the lower area
14.        For (J = 1 : N)
15.            x_new^J := randomly select from x_(BDR+1)^J to x_HMS^J
16.            If (Rand(0,1) < PAR) then
17.                x_new^J := |x_new^J − 1|
18.            End if
19.        End for
20.        generate new harmony (x_new)
21.    Else
22.        generate new harmony randomly
23.    End if
24.    If (fit(HM_new(upper, lower)) < fit(HM_old))
25.        update harmony memory
26.    End if
27.    itr := itr + 1
28. Until (itr ≥ maxit)
29. Get the best harmony
Step 1. Initializing variables and harmony
To create a combination of the 20 representative genes, each harmony vector is first initialized with 0s and 1s. Zero means that the representative gene at that index is not used as a feature for classification, and 1 means that it is used. HMCR is 0.9, PAR is 0.1, and the number of iterations (itr) is 500. HMS is set to 30.
Step 2. Creating new harmony and dividing harmony memory
This step is the part of the existing HS that was modified for this study. The process of creating a new harmony is the same as in the existing HS algorithm, but the experiment was conducted by dividing the harmony memory into two areas, as shown in Figure 2.
The upper area is composed of the harmonies with the top 20% of fitness values within one harmony memory. HMCR and PAR are not used for this area, so no new harmony is added by random initialization here. Rather than creating diversity of combinations, the idea is that recombining within the harmonies of the upper area can yield combinations of even higher fitness, from which new harmonies are created. In the second area, the lower area of the harmony memory, new harmonies are created using the existing HS algorithm, that is, by using HMCR and PAR.
Step 3. Updating harmony memory
The fitness is the classification accuracy obtained by applying the classification model used in this paper to the gene combination encoded by the harmony. The fitness of each harmony is calculated, and the harmonies are arranged in order of decreasing fitness. As two new harmonies are created in Step 2, the two old harmonies with the lowest fitness, aligned as shown in Figure 3, are removed to keep the harmony memory at the initially specified HMS.
Step 4. Repeating Steps 2 and 3
There is no newly modified process at this stage. Steps 2 and 3 are repeated for the specified number of iterations. As the number of repetitions increases, the upper area finds harmonies of even higher fitness within the already well-fitting combinations, whereas the lower area retains the advantage of the existing HS, namely finding combinations through diversity. As the iterations proceed, the highest classification accuracy of the two areas within one harmony memory is stored in a text file, and the change in accuracy over the iterations is monitored. A condensed sketch of the whole modified search loop is given below.
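The sketch below condenses Algorithm 1 into Python, assuming a `fitness(mask)` function that returns the cross-validated classification accuracy of a binary gene mask (one such function is sketched in the next subsection). The parameter values follow the paper (HMS = 30, HMCR = 0.9, PAR = 0.1, 500 iterations, upper area = top 20%); all names and implementation details beyond that are our own simplification, not the authors' code.

import random

def modified_harmony_search(fitness, n_genes=20, hms=30, hmcr=0.9, par=0.1, max_itr=500):
    """Search for the binary gene mask with the highest fitness."""
    memory = [[random.randint(0, 1) for _ in range(n_genes)] for _ in range(hms)]
    memory.sort(key=fitness, reverse=True)            # best harmony first
    bdr = int(hms * 0.2)                              # boundary between upper and lower areas

    def improvise(pool, use_par):
        harmony = []
        for j in range(n_genes):
            value = random.choice(pool)[j]            # recombine within the given area
            if use_par and random.random() < par:
                value = 1 - value                     # pitch adjustment: flip the bit
            harmony.append(value)
        return harmony

    for _ in range(max_itr):
        new_upper = improvise(memory[:bdr], use_par=False)    # upper area: recombination only
        if random.random() < hmcr:                            # lower area: standard HS
            new_lower = improvise(memory[bdr:], use_par=True)
        else:
            new_lower = [random.randint(0, 1) for _ in range(n_genes)]
        # add the two new harmonies and drop the two worst to keep HMS harmonies
        memory = sorted(memory + [new_upper, new_lower], key=fitness, reverse=True)[:hms]
    return memory[0]                                  # best gene combination found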
• Classification and Validation
We used an artificial neural network (ANN) as the classifier [28]. An ANN is a network created by abstracting the neurons of the brain. Figure 4 shows the structure of the ANN used in our study: the input and hidden layers each consist of five nodes, the output layer consists of one node, and the sigmoid function is used as the activation function.
We used K-fold cross-validation as the experimental verification technique [29]. All data were used as a test set at least once to increase the reliability of the verification. Figure 5 shows how the data are divided into training and test sets using 5-fold cross-validation. The gene combinations selected by HS were likewise verified through 5-fold cross-validation; a minimal sketch of this fitness evaluation is shown below.
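The following sketch evaluates a gene mask with a small neural network and 5-fold cross-validation, assuming X_rep is the (62 patients x 20 representative genes) matrix and y the class labels. scikit-learn's MLPClassifier with one five-node logistic hidden layer only approximates the ANN of Figure 4; the paper does not specify its training details, so this is an illustrative stand-in.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def fitness(mask, X_rep, y):
    """Mean 5-fold cross-validated accuracy of the genes selected by the binary mask."""
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return 0.0                                    # an empty gene set cannot classify anything
    clf = MLPClassifier(hidden_layer_sizes=(5,), activation='logistic', max_iter=2000)
    scores = cross_val_score(clf, X_rep[:, cols], y, cv=5)
    return scores.mean()

To plug this into the earlier search sketch, the data arguments can be bound in advance, for example with functools.partial(fitness, X_rep=X_rep, y=y).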

4. Results

A total of 1000 candidate genes selected out of the 2000 genes by the Fisher score were divided into 20 clusters using K-means clustering. The number of clusters was determined using the inertia value provided by scikit-learn. Figure 6 shows the inertia value according to the number of clusters. The lower the inertia value, the closer the data points within a cluster are to its centroid, that is, the more tightly aggregated the cluster. However, too many clusters can confuse the classification. A small sketch of how such an inertia curve can be produced is shown below.
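The following sketch shows how an inertia curve like that of Figure 6 could be produced; the `genes` matrix here is random placeholder data standing in for the (1000 candidate genes x 62 patients) matrix so the snippet runs on its own, and the exact range of cluster counts is our assumption.

import numpy as np
from sklearn.cluster import KMeans

genes = np.random.rand(1000, 62)    # placeholder for the candidate-gene matrix

inertias = []
for k in range(2, 41):
    km = KMeans(n_clusters=k, random_state=0).fit(genes)
    inertias.append(km.inertia_)    # within-cluster sum of squared distances
# The number of clusters (20 in this study) is chosen where the curve flattens out.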
The representative genes selected from the 20 clusters through the cosine distance are as follows: attribute357, attribute457, attribute750, attribute722, attribute1635, attribute982, attribute936, attribute1897, attribute1515, attribute316, attribute1069, attribute1170, attribute158, attribute737, attribute640, attribute482, attribute109, attribute980, attribute43, and attribute1244. Table 2 summarizes the values of the 20 representative genes for the 62 samples. All values in Table 2 are displayed to only four decimal places. Each row corresponds to a patient, and each column gives that patient's gene information value for one representative attribute.
We then selected eight genes from the 20 representative genes using the modified HS feature selection method. The selected genes were attribute43 (ribosomal protein; Nicotiana tabacum), attribute737 (monoamine oxidase B), attribute936 (proteasome component), attribute1170 (GST1-Hs mRNA for GTP-binding protein), attribute1244 (mRNA for upstream binding factor), attribute1515 (grancalcin mRNA), attribute1635 (vasoactive intestinal peptide mRNA), and attribute1897 (zinc finger protein mRNA). The classification accuracy obtained with the ANN using these genes was 93.46%. Each attribute is closely related to CRC or cancer, as supported by several studies [30,31,32,33,34,35,36].

5. Comparisons with Other Method Surveys

Many researchers have experimented with various classification algorithms using the colon cancer data provided by the Princeton University Gene Expression Project. Table 3 lists the number of genes selected in the present study and in other studies, together with the corresponding classification accuracies. As a range of accuracies can make comparison ambiguous, the representative accuracy reported in each paper is shown. Some of the compared papers perform classification without feature selection: studies have applied the random forest (RF) algorithm [37], support vector machine (SVM) models [13], two-way clustering [38], and LogitBoost with 10-fold cross-validation [39] to the data provided by the Princeton University Gene Expression Project. In addition, there are studies that derive classification accuracy through feature selection using the Chameleon algorithm [40] and the supervised group Lasso [41].
The proposed method achieved the highest accuracy in the comparison, whether or not the other studies used feature selection. The Chameleon algorithm selected the fewest features among the comparative studies, but our proposed method achieved better accuracy (93.46% vs. 85.48%).

6. Conclusions and Future Works

In this study, to classify CRC using gene information, we proposed a hybrid method that normalizes gene information values using Z-normalization, reduces redundant genes using the Fisher score, selects representative genes using K-means clustering, and performs feature selection using the modified HS algorithm. In the K-means clustering step, selecting representative genes using the cosine distance is straightforward and effective. The feature selection method modified from the original HS algorithm maintains high accuracy and improves classification performance by applying various combinations to the model. The experimental results showed a classification performance of 93.46% with only eight genes selected by the proposed method: attribute1635, attribute936, attribute1897, attribute1515, attribute1170, attribute737, attribute43, and attribute1244. This can lead to cost savings because fewer genetic tests are required. In addition, the results of the present study can contribute greatly to the prediction not only of the CRC gene but also of various other disease-causing genes. For example, hereditary breast or ovarian cancer could also be predicted through genetic testing using the proposed method [42,43]. It is important to confirm the likelihood of a cancer gene through genetic testing for people with a family history of cancer-related diseases or for people who are likely to develop cancer. Therefore, research on predicting cancer by finding a small number of mutation-related genes will be actively conducted in the future. There is also the possibility of conducting experiments in different ways; for example, the genetic data used in our paper could be analyzed with other methods, including approaches developed for single-particle tracking experiments. Additionally, our proposed methods can be applied to cancer-tracking time series data or non-genetic data (diet, smoking, or exercise) as well as genetic data to increase the objectivity and suitability of our model and data [44,45].

Author Contributions

Conceptualization, J.H.B.; methodology, J.H.B. and M.K.; software and experiments, J.H.B. and M.K.; writing—original draft preparation, J.H.B.; writing—review and editing, J.H.B. and Z.W.G.; supervision, J.S.L. and Z.W.G.; funding acquisition, Z.W.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by a National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (2020R1A2C1A01011131). This research was also supported by the Energy Cloud R&D Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (2019M3F2A1073164).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bray, F.; Ferlay, J.; Soerjomataram, I.; Siegel, R.L.; Torre, L.A.; Jemal, A. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J. Clin. 2018, 68, 394–424.
  2. Ferlay, J.; Ervik, M.; Lam, F.; Colombet, M.; Mery, L.; Piñeros, M.; Bray, F. Global Cancer Observatory: Cancer Today; International Agency for Research on Cancer: Lyon, France, 2018.
  3. Center, M.M.; Jemal, A.; Smith, R.A.; Ward, E. Worldwide Variations in Colorectal Cancer. CA Cancer J. Clin. 2009, 59, 366–378.
  4. Siegel, R.L.; Fedewa, S.A.; Anderson, W.F.; Miller, K.D.; Ma, J.; Rosenberg, P.S.; Jemal, A. Colorectal Cancer Incidence Patterns in the United States, 1974–2013. J. Natl. Cancer Inst. 2017, 109.
  5. Rawla, P.; Sunkara, T.; Barsouk, A. Epidemiology of colorectal cancer: Incidence, mortality, survival, and risk factors. Gastroenterol. Rev. 2019, 14, 89–103.
  6. Soravia, C.; Bapat, B.; Cohen, Z. Familial adenomatous polyposis (FAP) and hereditary nonpolyposis colorectal cancer (HNPCC): A review of clinical, genetic and therapeutic aspects. Schweiz. Med. Wochenschr. 1997, 127, 682.
  7. National Center for Biotechnology Information (US). Genes and Disease: Colon Cancer; National Center for Biotechnology Information (US): Bethesda, MD, USA, 1998. Available online: https://www.ncbi.nlm.nih.gov/books/NBK22218/ (accessed on 15 January 2021).
  8. Burt, R.; Neklason, D.W. Genetic Testing for Inherited Colon Cancer. Gastroenterology 2005, 128, 1696–1716.
  9. Gu, Q.; Li, Z.; Han, J. Generalized Fisher score for feature selection. arXiv 2012, arXiv:1202.3725.
  10. Coates, A.; Ng, A.Y. Learning Feature Representations with K-Means. In Pattern Recognition. ICPR International Workshops and Challenges; Springer: New York, NY, USA, 2012; pp. 561–580.
  11. Lee, K.S.; Geem, Z.W. A new meta-heuristic algorithm for continuous engineering optimization: Harmony search theory and practice. Comput. Methods Appl. Mech. Eng. 2005, 194, 3902–3933.
  12. Bertoni, A.; Folgieri, R.; Valentini, G. Bio-molecular cancer prediction with random subspace ensembles of support vector machines. Neurocomputing 2005, 63, 535–539.
  13. Valentini, G.; Masulli, F. NEURObjects: An object-oriented library for neural network development. Neurocomputing 2002, 48, 623–646.
  14. Marvi-Khorasani, H.; Usefi, H. Feature Clustering Towards Gene Selection. In Proceedings of the 2019 18th IEEE International Conference on Machine Learning and Applications (ICMLA), Boca Raton, FL, USA, 16–19 December 2019; pp. 1466–1469.
  15. Princeton University Gene Expression Project. Available online: http://microarray.princeton.edu/oncology/ (accessed on 15 January 2021).
  16. Misquitta, C.M.; Iyer, V.R.; Werstiuk, E.S.; Grover, A.K. The role of 3′-untranslated region (3′-UTR) mediated mRNA stability in cardiovascular pathophysiology. Mol. Cell. Biochem. 2001, 224, 53–67.
  17. Cheadle, C.; Vawter, M.P.; Freed, W.J.; Becker, K.G. Analysis of Microarray Data Using Z Score Transformation. J. Mol. Diagn. 2003, 5, 73–81.
  18. Bry, X.; Trottier, C.; Verron, T.; Mortier, F. Supervised component generalized linear regression using a PLS-extension of the Fisher scoring algorithm. J. Multivar. Anal. 2013, 119, 47–60.
  19. Hickmann, K.S.; Fairchild, G.; Priedhorsky, R.; Generous, N.; Hyman, J.M.; Deshpande, A.; Del Valle, S.Y. Forecasting the 2013–2014 Influenza Season Using Wikipedia. PLoS Comput. Biol. 2015, 11, e1004239.
  20. Li, X.; Wang, B.; Lv, H.; Yin, Q.; Zhang, Q.; Wei, X. Constraining DNA Sequences with a Triplet-Bases Unpaired. IEEE Trans. Nanobioscience 2020, 19, 299–307.
  21. Shin, S.-Y.; Lee, I.-H.; Kim, D.; Zhang, B.-T. Multiobjective Evolutionary Optimization of DNA Sequences for Reliable DNA Computing. IEEE Trans. Evol. Comput. 2005, 9, 143–158.
  22. Chaves-González, J.M.; Vega-Rodríguez, M.A. DNA strand generation for DNA computing by using a multi-objective differential evolution algorithm. Biosystems 2014, 116, 49–64.
  23. Chaves-González, J.M. Hybrid multiobjective metaheuristics for the design of reliable DNA libraries. J. Heuristics 2015, 21, 751–788.
  24. Chaves-González, J.M.; Vega-Rodríguez, M.A.; Granado-Criado, J.M. A multiobjective swarm intelligence approach based on artificial bee colony for reliable DNA sequence design. Eng. Appl. Artif. Intell. 2013, 26, 2045–2057.
  25. Elyasigomari, V.; Lee, D.; Screen, H.; Shaheed, M. Development of a two-stage gene selection method that incorporates a novel hybrid approach using the cuckoo optimization algorithm and harmony search for cancer classification. J. Biomed. Inform. 2017, 67, 11–20.
  26. Mohsen, A.M.; Khader, A.T.; Ramachandram, D. HSRNAFold: A harmony search algorithm for RNA secondary structure prediction based on minimum free energy. In Proceedings of the 2008 International Conference on Innovations in Information Technology, Al Ain, United Arab Emirates, 16–18 December 2008; pp. 11–15.
  27. Faraji, B.; Esfahani, Z.; Rouhollahi, K.; Khezri, D. Optimal Canceling of the Physiological Tremor for Rehabilitation in Parkinson's Disease. J. Exerc. Sci. Med. 2020, 11.
  28. Jain, A.K.; Mao, J.; Mohiuddin, M. Neural networks: A tutorial. IEEE Comput. 1996, 29, 31–44.
  29. Elad, A.; Kimmel, R. On bending invariant signatures for surfaces. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 1285–1295.
  30. Grasso, S.; Tristante, E.; Saceda, M.; Carbonell, P.; Mayor-López, L.; Carballo-Santana, M.; Martínez-Lacaci, I. Resistance to Selumetinib (AZD6244) in colorectal cancer cell lines is mediated by p70S6K and RPS6 activation. Neoplasia 2014, 16, 845–860.
  31. Yang, Y.C.; Chien, M.H.; Lai, T.C.; Su, C.Y.; Jan, Y.H.; Hsiao, M.; Chen, C.L. Monoamine Oxidase B Expression Correlates with a Poor Prognosis in Colorectal Cancer Patients and Is Significantly Associated with Epithelial-to-Mesenchymal Transition-Related Gene Signatures. Int. J. Mol. Sci. 2020, 21, 2813.
  32. Yang, Q.; Roehrl, M.H.; Wang, J.Y. Proteomic profiling of antibody-inducing immunogens in tumor tissue identifies PSMA1, LAP3, ANXA3, and maspin as colon cancer markers. Oncotarget 2018, 9, 3996–4019.
  33. Alves Martins, B.A.; De Bulhões, G.F.; Cavalcanti, I.N.; Martins, M.M.; de Oliveira, P.G.; Martins, A.M.A. Biomarkers in colorectal cancer: The role of translational proteomics research. Front. Oncol. 2019, 9, 1284.
  34. Huang, R.; Wu, T.; Xu, L.; Liu, A.; Ji, Y.; Hu, G. Upstream binding factor up-regulated in hepatocellular carcinoma is related to the survival and cisplatin-sensitivity of cancer cells. FASEB J. 2002, 16, 293–301.
  35. Korman, L.Y.; Sayadi, H.; Bass, B.; Moody, T.W.; Harmon, J.W. Distribution of vasoactive intestinal polypeptide and substance P receptors in human colon and small intestine. Dig. Dis. Sci. 1989, 34, 1100–1108.
  36. Wong, T.-S.; Gao, W.; Chan, J.Y.-W. Transcription Regulation of E-Cadherin by Zinc Finger E-Box Binding Homeobox Proteins in Solid Tumors. BioMed Res. Int. 2014, 2014, 1–10.
  37. Diaz-Uriarte, R.; De Andrés, S.A. Gene selection and classification of microarray data using random forest. BMC Bioinform. 2006, 7, 3.
  38. Alon, U.; Barkai, N.; Notterman, D.A.; Gish, K.; Ybarra, S.; Mack, D.; Levine, A.J. Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. Proc. Natl. Acad. Sci. USA 1999, 96, 6745–6750.
  39. Dettling, M.; Bühlmann, P. Boosting for tumor classification with gene expression data. Bioinformatics 2003, 19, 1061–1069.
  40. Xie, J.; Wang, Y.; Wu, Z. Colon cancer data analysis by chameleon algorithm. Health Inf. Sci. Syst. 2019, 7, 1–8.
  41. Ma, S.; Song, X.; Huang, J. Supervised group Lasso with applications to microarray data analysis. BMC Bioinform. 2007, 8, 60.
  42. Hedenfalk, I.; Duggan, D.; Chen, Y.; Radmacher, M.; Bittner, M.; Simon, R.; Trent, J. Gene-expression profiles in hereditary breast cancer. N. Engl. J. Med. 2001, 344, 539–548.
  43. Prat, J.; Ribé, A.; Gallardo, A. Hereditary ovarian cancer. Hum. Pathol. 2005, 36, 861–870.
  44. Thapa, S.; Lomholt, M.A.; Krog, J.; Cherstvy, A.G.; Metzler, R. Bayesian analysis of single-particle tracking data using the nested-sampling algorithm: Maximum-likelihood model selection applied to stochastic-diffusivity data. Phys. Chem. Chem. Phys. 2018, 20, 29018–29037.
  45. Muñoz-Gil, G.; Garcia-March, M.A.; Manzo, C.; Martín-Guerrero, J.D.; Lewenstein, M. Single trajectory characterization via machine learning. New J. Phys. 2019, 22, 013010.
Figure 1. Scheme of proposed methods.
Figure 2. Divided harmony memory.
Figure 3. Elimination of two worst harmonies.
Figure 4. Structure of ANN (artificial neural network).
Figure 5. 5-fold cross-validation data segmentation.
Figure 6. Inertia value according to the number of clusters.
Table 1. Z-normalized values of 12 patients corresponding to Attribute 1.
Patient Number | Attribute 1 | After Z-Normalization
1 | 8589.4163 | 0.508776126
2 | 9164.2537 | 0.694628976
3 | 3825.705 | −1.031397365
4 | 6246.4487 | −0.248737577
5 | 3230.3287 | −1.223890725
6 | 2510.325 | −1.45667784
57 | 4653.2375 | −0.763844707
58 | 4972.1662 | −0.660730665
59 | 9112.3725 | 0.67785507
60 | 6730.625 | −0.092196709
61 | 6234.6225 | −0.252561151
62 | 7472.01 | 0.147503275
Table 2. 20 representative genes used in modified harmony search expression levels.
Patient | 357 | 457 | 750 | 722 | 1635 | 982 | 936 | 1897 | 1515 | 316 | 1069 | 1170 | 158 | 737 | 640 | 482 | 109 | 980 | 43 | 1244 | AA
1−0.19920.50710.1017−0.0684−0.6675−0.0300−0.5490−0.66260.1372−0.0531−0.3638−0.6533−0.5021−0.3102−0.2184−0.8842−0.84170.4786−0.2234−0.26270
2−0.76660.43990.57560.37170.99670.4240−0.25090.4489−0.0195−0.64261.8825−0.3336−0.11461.27920.3798−0.44300.10980.7575−0.72580.00561
33.0568−0.11591.2351−0.5044−0.8140−0.9917−0.7655−0.8108−1.2720−1.28960.7242−1.1906−0.5315−0.8249−0.8806−1.1246−1.16430.2576−1.3183−1.08840
41.2695−0.09030.8739−0.2384−0.5939−0.8132−0.7175−0.5897−0.5017−0.9306−0.3337−0.9889−0.69940.1333−0.4219−0.8763−1.12930.3639−1.0305−0.79141
5−0.0648−0.3319−0.6814−0.8450−0.6507−0.9795−0.5607−0.4224−1.1695−0.6220−0.3618−0.5248−0.3469−1.0208−0.3840−0.3813−0.35770.35200.1902−0.63800
6−0.2609−0.4963−1.1186−1.2516−0.2586−1.0143−0.76580.0749−1.2676−0.6466−0.7588−0.9889−0.5115−0.7993−0.9140−0.7159−1.1862−0.0364−0.9349−0.62441
7−0.67300.9887−0.1339−0.0290−0.7465−0.3074−0.3726−0.6688−0.8856−0.6814−0.8619−0.8984−0.3543−1.0381−0.2699−0.5531−0.5336−0.0188−0.5576−0.73220
8−0.6308−0.1363−0.8422−0.4778−0.3815−0.7124−0.3125−0.6811−1.2536−0.3774−0.3012−1.0148−0.1874−0.3649−0.6253−0.4677−1.05931.0362−1.0438−0.36381
9−1.04752.24751.63910.1700−0.87090.55760.8973−0.55811.47640.8136−0.5818−0.8986−0.2457−0.33600.91010.9838−1.05310.21951.7120−1.67390
10−0.25500.3901−0.1093−0.48330.5957−0.7036−0.53060.5286−1.0627−0.5627−0.38060.11060.9938−0.5130−0.1034−0.27000.53690.2222−0.51020.41711
110.46433.1825−0.37542.55360.33933.09580.8291−0.2470−0.78391.0607−0.12223.41123.1890−0.82445.50612.06063.05061.60471.6041−0.08070
121.06350.81520.81340.79390.40410.7287−0.33380.69670.27580.00281.3968−0.27610.25021.46740.5655−0.31760.10290.04660.0120−0.36801
13−0.3604−0.3781−0.5081−0.0171−0.2191−0.68800.0041−0.3768−0.9954−0.2205−0.5420−0.2176−0.5044−0.7372−0.5431−0.40080.21320.3524−0.0848−0.24780
140.6732−0.6340−0.5889−0.7660−0.0504−0.9001−0.5282−0.1615−0.7677−0.4724−0.5403−0.8730−0.8915−0.5668−0.7117−0.7386−0.44620.0252−0.4674−0.54181
150.1503−0.4892−0.0644−0.6390−0.4183−0.6987−0.1476−0.4740−0.39080.3067−0.1940−0.4529−0.2461−0.2099−0.6133−0.1809−0.33712.82370.45980.32340
160.1676−0.7636−0.6108−1.2205−0.5852−1.1287−0.2759−0.8298−0.88910.1060−0.8727−0.8725−0.8945−1.0600−0.9146−0.3240−0.03260.41680.0068−1.12811
17−0.5718−0.8460−1.2704−0.3834−0.8923−0.3836−0.5605−0.6439−1.2231−0.7155−0.8652−0.9879−1.1323−0.8553−0.4576−0.6124−0.9954−1.02130.5998−0.97270
18−0.9664−0.6742−1.2115−0.7833−0.0885−0.4719−0.8443−0.5524−1.2840−1.1437−0.9806−1.0695−1.2193−0.9612−0.4430−0.8493−1.1887−0.3629−1.0256−0.65101
19−1.0436−0.2089−0.6123−0.2747−0.9536−0.6383−0.2481−0.6895−0.7374−0.1340−0.7427−0.7266−0.9080−0.8854−0.0453−0.1587−1.2486−0.34000.2375−0.91340
20−0.25270.1530−1.2221−0.87150.4357−0.9488−0.57320.0302−1.3058−0.5522−0.7981−0.6371−0.1376−0.9913−0.6748−0.6001−0.9203−0.0483−0.3883−0.18061
21−0.92181.02380.18570.1325−0.50600.6150−0.0497−0.65400.1856−1.4646−0.74670.14200.1219−0.2450−0.0476−0.2655−0.19100.7110−0.28190.59220
221.19150.54210.62840.22801.42420.2572−0.64870.19010.1229−0.48060.24460.25410.43710.52560.0108−0.80160.48051.4955−0.79111.92971
23−0.9997−1.2956−1.3319−0.9966−0.7991−1.0448−0.1708−0.6968−1.19300.0079−0.9208−0.8099−0.7786−1.0369−0.53000.1939−0.15100.26020.2545−0.68400
24−1.1251−1.3200−1.5176−1.5869−0.5999−1.2172−1.0349−0.8018−1.3932−1.3848−1.0472−1.1924−1.2572−1.0503−0.9622−1.0551−1.2676−1.0237−1.4197−1.25311
252.1527−0.37990.41280.0581−0.6111−0.2685−0.0154−0.52310.55920.7240−0.7720−0.26150.0890−0.5596−0.32970.53330.3943−0.83991.5212−0.04600
26−0.8154−0.3332−0.5275−0.2076−0.7800−0.6859−0.7967−0.69350.0172−0.8118−0.73270.0515−0.1746−0.8941−0.5018−0.67010.5713−1.2620−0.8807−0.90110
27−0.7735−0.6015−0.6106−0.5381−0.4543−0.2415−0.7473−0.58190.0240−0.7150−0.6884−0.0911−0.1966−0.7717−0.5844−0.69310.1300−1.3225−0.7485−0.87370
28−0.91090.51731.09750.5547−0.69700.26490.2318−0.55222.16601.0763−0.03860.31830.77180.43140.48050.58931.3612−0.52060.9256−0.20980
29−0.57212.75453.12741.9689−0.00941.1863−0.0478−0.18912.99060.41990.35712.05843.0102−0.14711.0052−0.13872.7865−0.6162−0.10870.45740
30−0.91611.66001.19542.3194−0.49401.79430.4048−0.31101.47130.4678−0.32491.04371.83330.13170.95020.38602.1646−0.35480.6085−0.08980
31−1.11650.88261.86290.2982−0.35990.26750.35710.39012.50010.95784.19380.27541.21051.0568−0.09280.24850.4813−0.81990.32560.54390
32−0.9831−0.1959−1.3571−0.41430.41540.5795−0.50730.1110−0.9439−0.7458−0.48050.1408−0.2808−1.0085−0.2011−0.2811−0.3807−1.1552−0.5167−0.87090
33−0.4961−0.0128−1.1792−0.2391−0.5376−0.3355−0.3795−0.6084−0.9436−0.1182−1.0148−0.3036−0.0893−1.0762−0.43750.3608−0.3553−1.28580.4535−0.46510
340.98350.13540.85430.1376−0.44180.03550.7054−0.20090.88131.00630.3341−0.17280.43610.3466−0.19040.14230.3706−0.48960.3484−0.39740
35−0.4833−0.3394−0.6218−0.1287−0.67900.1834−0.3002−0.5671−0.1395−0.4042−0.2245−0.3249−0.6001−0.8524−0.2944−0.3355−1.0430−1.0594−0.8860−0.80070
36−0.7281−0.8700−1.0871−0.8550−0.81650.09690.5636−0.5480−0.11810.3411−0.6805−0.2281−0.8138−0.8964−0.60670.6057−0.4189−0.84950.5910−1.14400
37−0.33993.09092.73140.6380−0.2966−0.2066−0.4223−0.74120.7620−0.6222−0.19750.17393.1760−0.73500.3958−0.51450.1316−0.5937−0.5462−0.05620
38−1.1911−0.9283−1.0429−1.0896−0.8135−0.61320.2649−0.7337−0.19260.4481−0.88650.0863−0.7292−1.0073−0.68580.50260.2004−1.08831.3413−1.11430
39−0.3071−1.24740.0860−1.09481.0319−0.8550−0.51130.4325−0.1901−0.49620.3330−0.4471−0.52491.0173−0.7199−0.5768−0.2850−0.7477−0.79280.44431
400.2579−0.6935−0.30560.8062−0.53880.4405−0.2563−0.11710.44000.0777−0.31320.0615−0.4472−0.12930.3970−0.29030.0332−0.56780.7060−0.17250
41−0.6221−1.2147−0.2310−0.9927−0.8238−0.7229−0.5687−0.65510.2961−0.7430−0.0772−0.9459−1.1104−0.4439−0.3033−0.6929−1.0863−1.1514−0.7252−0.93790
420.32460.1301−0.03650.75361.1815−0.16300.24930.8735−0.31920.2676−0.1930−0.4060−0.24700.74160.3565−0.0454−0.7271−0.2528−0.63291.21291
431.96950.90770.84362.20592.50411.15651.25235.03530.31980.64502.05931.55010.75832.04330.03961.74930.77790.72930.28502.09971
44−0.62861.02610.30362.5481−0.07143.82472.34600.24760.77101.86830.17062.30220.3095−0.19482.75122.07571.75300.94901.95870.69090
451.00090.78652.40111.56173.08450.81110.51971.27891.87711.12890.62362.53531.83333.43741.21300.44872.2789−0.28950.05933.55730
462.13430.75790.01622.67740.11632.11453.20650.57591.05343.2313−0.41361.30260.84830.36531.90493.13420.7206−0.29642.78391.51640
470.79650.32970.71660.43640.06770.83264.9333−0.02581.97362.0418−0.32032.97191.54550.56131.34454.56031.38772.12832.28652.01610
48−0.2596−0.33610.4716−0.13494.0534−0.19280.01671.39180.4278−0.22431.52000.89051.20191.40250.08350.03830.5237−0.1815−1.00541.60311
49−0.0226−0.4900−0.4330−0.31721.43280.0199−0.64440.1271−0.5104−0.9655−0.05330.3951−0.35410.91830.1715−0.7495−0.5247−0.9193−1.06240.26040
500.2691−0.7205−0.20080.29541.0927−1.0150−0.38823.2096−0.18760.61010.84240.0779−0.40901.3963−0.1282−0.04921.05131.5847−0.07300.80971
51−1.0684−0.6466−0.6541−0.6908−0.18360.3447−0.5880−0.53160.4261−0.2238−0.7169−0.1310−0.0254−0.4201−0.5104−0.3398−0.07211.8853−0.10811.65901
520.3214−0.04990.14441.4566−0.34492.21772.11680.08881.06173.9229−0.61090.4205−0.00790.45110.63791.60551.05413.24483.23240.61170
53−0.6844−0.8219−0.0865−0.7453−0.5648−0.0203−0.0723−0.45160.36640.32970.2201−0.4875−0.5796−0.2607−0.4161−0.4218−0.4390−0.39090.5963−0.02090
54−0.2355−0.64620.6279−0.37690.6364−0.71390.06741.76790.65810.31001.5562−0.21930.16562.0470−0.44470.0956−0.74350.4457−0.17011.70741
552.1278−0.4153−0.3230−0.1529−0.5469−0.2273−0.4385−0.1479−0.3919−0.05560.19970.0787−0.3176−0.1840−0.2261−0.4615−0.2847−0.8512−0.6685−0.22321
560.2032−0.4662−0.6407−0.55040.8009−0.6042−0.07971.3601−0.4682−0.42080.5604−0.3195−0.43530.4957−0.7274−0.1458−0.4557−0.6586−0.5978−0.10990
572.0444−0.85840.8658−0.9170−0.7307−0.6747−0.7436−0.2323−0.0305−0.45003.1704−0.4119−0.60791.3337−0.8944−0.6458−0.62730.9960−0.7052−0.30020
58−0.6652−0.9762−0.4155−0.7555−0.9611−0.8140−0.7491−0.81220.0495−1.10100.0414−0.8919−1.0581−0.5954−0.5056−0.7965−0.8875−1.2378−0.6453−1.11030
59−0.3915−0.6791−0.6367−0.6560−0.37470.34040.4901−0.1944−0.50810.3228−0.38090.6629−0.59260.1046−0.56000.35251.1441−0.10590.70940.47790
60−0.0889−0.5860−0.1552−0.20931.59650.1178−0.64040.47500.2935−1.10460.89020.6335−0.09281.81180.3945−0.3636−0.1894−0.2347−1.07480.52741
610.2936−0.5280−0.8700−0.2632−0.60200.0362−0.0931−0.63430.1302−0.6866−0.5733−0.4376−0.7027−0.3033−0.1950−0.3600−0.6969−0.7171−0.5882−0.52320
621.5230−0.4513−0.1981−0.19990.6194−0.3169−0.22571.2388−0.3746−0.23210.6893−0.2619−0.32171.6116−0.1817−0.0748−0.48940.3250−0.46970.09951
Table 3. Performance comparison of various algorithms.
Method | Number of Genes | Accuracy (%)
LogitBoost | 2000 | 85.48
Random Forest | 2000 | 84.10
SVM | 2000 | 82.26
Two-way clustering | 2000 | 87.10
Chameleon algorithm | 5 | 85.48
Supervised group Lasso | 22 | 85.48
Z-FS-KM-MHS (our method) | 8 | 93.46
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
