# An Efficient Cellular Automata-Based Classifier with Variance Decision Table


## Abstract


## 1. Introduction

## 2. Related Work

#### 2.1. Cellular Automata for Pattern Recognition

#### 2.2. An Enhanced Butterfly Optimization Algorithm (m-MBOA)

Algorithm 1: m-MBOA.

## 3. Proposed Method

**Definition 1**

**Lemma 1.**

**Proof of Lemma 1.**

**Definition 2**

**Lemma 2.**

**Proof of Lemma 2.**

**Lemma 3.**

**Proof of Lemma 3.**

**Definition 3**

**Definition 4**

**Lemma 4.**

**Proof of Lemma 4.**

**Definition 5**

**Theorem 1.**

#### 3.1. Proposed Method

Algorithm 2: CAV.

#### 3.1.1. Initial Values of Rule Matrices

#### 3.1.2. Create Variance Decision Table

#### 3.1.3. m-MBOA Process

#### 3.1.4. Rule Matrices Synthesis

## 4. Experiments

#### 4.1. Experimental Setup

#### 4.2. Datasets

#### 4.3. Classifier Performance Evaluation Method

## 5. Results and Discussion

## 6. Conclusions

## Author Contributions

## Funding

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest


**Figure 3.** The initial rule matrices process consists of three steps: (**a**) converting the input data into binary data using a gray code encoder; (**b**) creating initial rule matrices with periodic boundary conditions; and (**c**) generating initial rule matrices for both positive and negative data.
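Step (a) encodes each quantized feature value in reflected binary (gray) code, so that adjacent values differ in exactly one bit. A minimal sketch of such an encoder (the fixed `width` parameter is an assumption for illustration; the paper's exact quantization step is not reproduced here):

```python
def to_gray(n: int) -> int:
    """Convert a non-negative integer to its reflected binary (gray) code."""
    return n ^ (n >> 1)

def gray_bits(n: int, width: int) -> list[int]:
    """Gray-code n and return its bits as a fixed-width list (MSB first)."""
    g = to_gray(n)
    return [(g >> i) & 1 for i in range(width - 1, -1, -1)]

# Example: quantized feature value 5 -> gray code 7 -> bits [0, 1, 1, 1]
```

The single-bit-change property is what makes gray code attractive here: a small change in a feature value flips only one cell of the binary representation fed to the CA.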

**Figure 4.** The process of creating the variance decision table: (**a**) initial rule matrix of positive data; (**b**) initial rule matrix of negative data; and (**c**) generating the variance decision table.
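Per the notation table, each cell of the variance decision table involves the mean $\overline{x}$, standard deviation $S$, and percentage coefficient of variance $\%CV$ of the $k = 2$ samples $[U_p]_{ij}$ and $[U_n]_{ij}$. One plausible reading of step (c), sketched under that assumption (the exact formula from Section 3 is not reproduced in this extract):

```python
import numpy as np

def variance_decision_table(Up: np.ndarray, Un: np.ndarray) -> np.ndarray:
    """Cell-wise %CV of the k = 2 samples [Up]_ij and [Un]_ij (assumed reading).

    %CV = S / mean * 100, with the sample standard deviation (ddof=1).
    Cells whose mean is zero are set to 0 to avoid division by zero.
    """
    stacked = np.stack([Up, Un])            # shape (k=2, i, j)
    mean = stacked.mean(axis=0)
    std = stacked.std(axis=0, ddof=1)       # sample standard deviation S
    with np.errstate(divide="ignore", invalid="ignore"):
        cv = np.where(mean != 0, std / mean * 100.0, 0.0)
    return cv
```

For a single cell with values 2 and 4, the mean is 3, $S = \sqrt{2}$, and $\%CV \approx 47.14$.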

**Figure 5.** The m-MBOA process: (**a**) variance decision table; (**b**) creating vector V′ by de-duplicating and sorting the data; (**c**) running the m-MBOA algorithm to produce the variation coefficient; and (**d**) the variation coefficient.
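Step (b) reduces the variance decision table $V$ to the vector $V'$ by removing duplicates and sorting. In NumPy, `np.unique` performs exactly this flatten–deduplicate–sort in one call:

```python
import numpy as np

def make_v_prime(V: np.ndarray) -> np.ndarray:
    """Build V' from V as in Figure 5b: flatten, drop duplicates, sort ascending."""
    return np.unique(V)
```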

**Figure 14.** Average Friedman ranks for CAV, CAC, SVM, kNN, DNN-1, DNN-2, and naïve Bayes. The green bar shows the average Friedman rank of the proposed model; the blue bars show those of the classifiers compared against it.
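The average Friedman rank in Figure 14 is obtained by ranking the classifiers on each dataset (rank 1 = best, ties receiving the mean of the ranks they span) and averaging those ranks over all datasets. A sketch, assuming a `(n_datasets, n_classifiers)` score matrix where higher scores are better:

```python
import numpy as np

def average_friedman_rank(scores: np.ndarray) -> np.ndarray:
    """Average Friedman rank per classifier (rank 1 = best, ties averaged)."""
    n, m = scores.shape
    ranks = np.empty_like(scores, dtype=float)
    for d in range(n):
        order = np.argsort(-scores[d])            # descending: best first
        row = np.empty(m)
        row[order] = np.arange(1, m + 1, dtype=float)
        for v in np.unique(scores[d]):            # average ranks of tied groups
            tied = scores[d] == v
            row[tied] = row[tied].mean()
        ranks[d] = row
    return ranks.mean(axis=0)
```

A lower average rank means the classifier was more often among the best across the fifteen datasets.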

| Notation | Description |
|---|---|
| $i, j$ | Number of features of the (binary) dataset; number of possible ECA neighborhood configurations ($2^3 = 8$) |
| $k$ | Number of samples drawn from $U_p$ and $U_n$ ($k = 2$) |
| $U_p \in \mathbb{R}^{i \times j}$ | Initial rule matrix of the positive-class data |
| $U_n \in \mathbb{R}^{i \times j}$ | Initial rule matrix of the negative-class data |
| $V \in \mathbb{R}^{i \times j}$ | Variance decision table matrix |
| $V'$ | Variance decision table vector |
| $W_p \in \mathbb{R}^{i \times j}$ | Rule matrix of the positive class |
| $W_n \in \mathbb{R}^{i \times j}$ | Rule matrix of the negative class |
| $\overline{x}$ | Mean of each $[U_p]_{ij}$ and $[U_n]_{ij}$ |
| $S$ | Standard deviation of each $[U_p]_{ij}$ and $[U_n]_{ij}$ |
| $\%CV$ | Percentage coefficient of variance |
| $\mathcal{D}_t$ | Training data |
| $\omega_v$ | Variation coefficient |
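The $2^3 = 8$ neighborhood configurations above, together with the periodic boundary condition used when building the initial rule matrices (Figure 3b), can be illustrated with a one-step elementary CA update in Wolfram's rule numbering (illustrative only; the paper's rule matrices are learned, not fixed to one rule):

```python
def eca_step(cells: list[int], rule: int) -> list[int]:
    """One elementary CA update with periodic boundary conditions.

    Each cell's next state is looked up from its 3-cell neighborhood
    (one of the 2^3 = 8 configurations) in the Wolfram rule number.
    """
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (center << 1) | right   # neighborhood as a 3-bit index
        out.append((rule >> idx) & 1)
    return out
```

Under rule 90, for example, each cell becomes the XOR of its two neighbors, and the modulo indexing makes the first and last cells neighbors.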

| Classifier | Parameter | Value |
|---|---|---|
| CAV | Population | 100 |
| | Probability switch ($p$) | 0.5 |
| | Power exponent | 0.4 |
| | Sensory modality | 0.03 |
| | Max iterations | 20 |
| CAC | Sample | 100 |
| | Crossover probability | 48 |
| | Mutation probability | 40 |
| | Max iterations | 20 |
| SVM | SVM type | C-SVM |
| | Kernel type | RBF |
| | Class weight | 1.0 |
| kNN | $K$ (neighbors) | 5 |
| | Weighted vote | Yes |
| DNN-1 | Activation | Rectifier |
| | Hidden layers | 2 |
| | Hidden layer size | 50 |
| DNN-2 | Activation | Maxout |
| | Hidden layers | 3 |
| | Hidden layer size | 100 |
| Naïve Bayes | Laplace correction | Yes |
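For orientation, the CAV parameters above map onto the standard butterfly optimization update, where fragrance $f = cI^a$ uses sensory modality $c = 0.03$ and power exponent $a = 0.4$, and the probability switch $p = 0.5$ chooses between a global move toward the best solution and a local move between two peers. A minimal minimization sketch under those assumptions (the stimulus mapping `1 / (1 + fit)` is an assumption, and the mutualism phase that distinguishes m-MBOA is omitted):

```python
import random

def fragrance(stimulus: float, c: float = 0.03, a: float = 0.4) -> float:
    """BOA fragrance f = c * I^a (sensory modality c, power exponent a)."""
    return c * stimulus ** a

def boa_minimize(obj, dim, n=100, iters=20, p=0.5, lo=-1.0, hi=1.0):
    """Sketch of butterfly optimization with greedy replacement."""
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    fit = [obj(x) for x in pop]
    best = min(pop, key=obj)
    for _ in range(iters):
        for i, x in enumerate(pop):
            f = fragrance(1.0 / (1.0 + fit[i]))      # stimulus from fitness (assumed)
            r = random.random()
            if random.random() < p:                   # global search toward best
                cand = [xi + (r * r * bi - xi) * f for xi, bi in zip(x, best)]
            else:                                     # local search between two peers
                j, k = random.sample(range(n), 2)
                cand = [xi + (r * r * pj - pk) * f
                        for xi, pj, pk in zip(x, pop[j], pop[k])]
            if obj(cand) < fit[i]:                    # keep only improvements
                pop[i], fit[i] = cand, obj(cand)
        best = min(pop, key=obj)
    return best
```

With population 100 and only 20 iterations, as in the table, the algorithm relies on a cheap objective evaluation, which matches its use here for tuning the variation coefficient rather than training a full model.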

| Dataset | Classes | Instances | Features |
|---|---|---|---|
| Soybean (small) | 4 | 47 | 35 |
| SPECT heart | 2 | 80 | 23 |
| Analcatdata boxing2 | 2 | 132 | 10 |
| Tae | 3 | 151 | 6 |
| Hayes-Roth | 3 | 160 | 5 |
| Wisconsin | 2 | 194 | 33 |
| Sonar | 2 | 208 | 61 |
| Prnn synth | 2 | 256 | 3 |
| Thyroid Disease | 2 | 306 | 3 |
| Ecoli | 8 | 336 | 8 |
| Congressional Voting | 2 | 435 | 16 |
| Monk Problem 2 | 2 | 601 | 7 |
| Breast w | 2 | 699 | 10 |
| Pima Indians Diabetes | 2 | 768 | 8 |
| Mammographic Mass | 2 | 961 | 6 |

**Table 4.** The classification accuracy (K = 10) of the proposed CAV compared with CAC, SVM, kNN, DNN-1, DNN-2, and naïve Bayes (bold indicates the row maximum).

| Dataset | CAV | CAC | SVM | kNN | DNN-1 | DNN-2 | Naïve Bayes | Average Improvement |
|---|---|---|---|---|---|---|---|---|
| Analcatdata boxing2 | **72.360** | 69.200 | 71.970 | 64.390 | 56.820 | 63.640 | 67.420 | |
| Breast w | **97.420** | 96.850 | 96.850 | 97.000 | 96.570 | 96.570 | 95.990 | |
| Congressional voting | **96.980** | 95.270 | 92.240 | 93.970 | 95.260 | 95.260 | 94.830 | |
| Ecoli | **88.240** | 67.510 | 43.740 | 85.120 | 81.850 | 79.510 | 79.170 | |
| Hayes-Roth | **78.950** | 77.250 | 43.750 | 66.250 | 58.750 | 66.870 | 70.000 | |
| Mammographic mass | **82.640** | 82.430 | 80.480 | 79.520 | 82.170 | 81.330 | 80.600 | |
| Monk's problems 2 | 67.500 | 63.400 | **82.530** | 66.560 | 63.890 | 74.210 | 64.390 | |
| Pima indian diabetes | 72.700 | 69.263 | 72.010 | 71.740 | 74.220 | 74.350 | **75.520** | |
| Prnn synth | 83.279 | 78.704 | **84.400** | 81.600 | 83.200 | 82.800 | **84.400** | |
| Sonar | 74.550 | 74.143 | **87.020** | 80.290 | 80.290 | 81.250 | 67.790 | |
| Soybean small | **100.000** | **100.000** | **100.000** | 97.870 | **100.000** | **100.000** | **100.000** | |
| SPECT heart | **75.000** | 62.500 | 68.750 | 63.750 | 71.250 | 71.250 | 68.750 | |
| Tae | **59.040** | 57.536 | 47.020 | 47.680 | 46.360 | 41.720 | 51.660 | |
| Thyroid disease | 93.960 | 94.892 | 80.930 | 95.350 | 93.490 | 93.490 | **96.740** | |
| Wisconsin | **97.510** | 97.217 | 97.070 | 97.360 | 96.930 | 96.930 | 96.190 | |
| Average | 82.675 | 79.078 | 76.584 | 79.230 | 78.737 | 79.945 | 79.563 | |
| Improvement | | 3.60 | 6.09 | 3.45 | 3.94 | 2.73 | 3.11 | 4.58 |
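The Improvement row of Table 4 is simply the CAV column average minus each competitor's column average (the tabled 4.58 is the reported average improvement figure). A quick check of that arithmetic:

```python
# Column averages from Table 4 (classification accuracy, %).
averages = {"CAV": 82.675, "CAC": 79.078, "SVM": 76.584, "kNN": 79.230,
            "DNN-1": 78.737, "DNN-2": 79.945, "Naive Bayes": 79.563}

# CAV's improvement over each competitor, rounded to two decimals.
improvement = {name: round(averages["CAV"] - avg, 2)
               for name, avg in averages.items() if name != "CAV"}
```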

**Table 5.** The precision (K = 10) of the proposed CAV compared with CAC, SVM, kNN, DNN-1, DNN-2, and naïve Bayes (bold indicates the row maximum).

| Dataset | CAV | CAC | SVM | kNN | DNN-1 | DNN-2 | Naïve Bayes | Average Improvement |
|---|---|---|---|---|---|---|---|---|
| Analcatdata boxing2 | 0.720 | 0.750 | **0.873** | 0.747 | 0.521 | 0.648 | 0.761 | |
| Breast w | 0.993 | **0.994** | 0.972 | 0.974 | 0.965 | 0.965 | 0.952 | |
| Congressional voting | **0.983** | 0.977 | 0.887 | 0.927 | 0.944 | 0.960 | 0.960 | |
| Ecoli | **0.880** | 0.723 | 0.088 | 0.558 | 0.595 | 0.739 | 0.547 | |
| Hayes-Roth | **0.835** | 0.830 | 0.366 | 0.637 | 0.631 | 0.693 | 0.751 | |
| Mammographic mass | 0.815 | 0.803 | **0.873** | 0.814 | 0.772 | 0.757 | 0.811 | |
| Monk's problems 2 | 0.889 | 0.727 | 0.779 | **1.000** | 0.527 | 0.676 | 0.980 | |
| Pima indian diabetes | 0.809 | **0.846** | 0.466 | 0.519 | 0.474 | 0.489 | 0.605 | |
| Prnn synth | 0.854 | 0.789 | 0.832 | **0.920** | 0.776 | 0.760 | 0.848 | |
| Sonar | 0.744 | 0.786 | 0.814 | 0.722 | 0.773 | **0.835** | 0.804 | |
| Soybean small | **1.000** | **1.000** | **1.000** | 0.975 | **1.000** | **1.000** | **1.000** | |
| SPECT heart | **0.815** | 0.583 | 0.450 | 0.450 | 0.700 | 0.700 | 0.450 | |
| Tae | **0.629** | 0.622 | 0.468 | 0.475 | 0.461 | 0.410 | 0.518 | |
| Thyroid disease | 0.927 | 0.940 | 0.580 | 0.910 | 0.863 | 0.911 | **0.965** | |
| Wisconsin | **0.991** | **0.991** | 0.973 | 0.978 | 0.964 | 0.964 | 0.955 | |
| Average | 0.859 | 0.824 | 0.695 | 0.774 | 0.731 | 0.767 | 0.794 | |
| Improvement | | 0.035 | 0.164 | 0.085 | 0.128 | 0.092 | 0.065 | 0.114 |

**Table 6.** The recall (K = 10) of the proposed CAV compared with CAC, SVM, kNN, DNN-1, DNN-2, and naïve Bayes (bold indicates the row maximum).

| Dataset | CAV | CAC | SVM | kNN | DNN-1 | DNN-2 | Naïve Bayes | Average Improvement |
|---|---|---|---|---|---|---|---|---|
| Analcatdata boxing2 | 0.631 | 0.500 | **0.689** | 0.646 | 0.617 | 0.667 | 0.675 | |
| Breast w | 0.967 | 0.958 | 0.980 | 0.980 | 0.982 | 0.982 | **0.986** | |
| Congressional voting | 0.960 | 0.935 | 0.965 | 0.958 | **0.967** | 0.952 | 0.944 | |
| Ecoli | **0.835** | 0.700 | 0.200 | 0.551 | 0.571 | 0.755 | 0.563 | |
| Hayes-Roth | **0.798** | 0.783 | 0.601 | 0.735 | 0.615 | 0.679 | 0.742 | |
| Mammographic mass | 0.834 | **0.851** | 0.760 | 0.775 | 0.847 | 0.843 | 0.794 | |
| Monk's problems 2 | 0.400 | 0.400 | 0.849 | 0.663 | 0.874 | **0.908** | 0.653 | |
| Pima indian diabetes | **0.760** | 0.646 | 0.635 | 0.612 | 0.690 | 0.686 | 0.664 | |
| Prnn synth | 0.791 | 0.808 | 0.853 | 0.762 | 0.874 | **0.880** | 0.841 | |
| Sonar | 0.819 | 0.728 | **0.898** | 0.833 | 0.798 | 0.779 | 0.619 | |
| Soybean small | **1.000** | **1.000** | **1.000** | 0.986 | **1.000** | **1.000** | **1.000** | |
| SPECT heart | 0.700 | 0.500 | **0.857** | 0.628 | 0.718 | 0.718 | **0.857** | |
| Tae | **0.589** | 0.576 | 0.469 | 0.479 | 0.511 | 0.282 | 0.513 | |
| Thyroid disease | 0.927 | 0.925 | 0.806 | 0.960 | **0.962** | 0.911 | 0.942 | |
| Wisconsin | 0.971 | 0.966 | 0.982 | 0.982 | **0.989** | **0.989** | 0.986 | |
| Average | 0.799 | 0.752 | 0.769 | 0.770 | 0.801 | 0.802 | 0.785 | |
| Improvement | | 0.047 | 0.029 | 0.029 | −0.002 | −0.003 | 0.014 | 0.023 |

**Table 7.** The F1 score (K = 10) of the proposed CAV compared with CAC, SVM, kNN, DNN-1, DNN-2, and naïve Bayes (bold indicates the row maximum).

| Dataset | CAV | CAC | SVM | kNN | DNN-1 | DNN-2 | Naïve Bayes | Average Improvement |
|---|---|---|---|---|---|---|---|---|
| Analcatdata boxing2 | 0.662 | 0.600 | **0.770** | 0.693 | 0.565 | 0.657 | 0.715 | |
| Breast w | **0.980** | 0.975 | 0.976 | 0.977 | 0.974 | 0.974 | 0.969 | |
| Congressional voting | **0.971** | **0.971** | 0.924 | 0.943 | 0.955 | 0.956 | 0.953 | |
| Ecoli | 0.850 | **0.921** | 0.122 | 0.553 | 0.574 | 0.738 | 0.544 | |
| Hayes-Roth | 0.786 | **0.788** | 0.310 | 0.663 | 0.622 | 0.685 | 0.742 | |
| Mammographic mass | 0.823 | **0.825** | 0.813 | 0.794 | 0.808 | 0.797 | 0.803 | |
| Monk's problems 2 | 0.552 | 0.516 | **0.812** | 0.797 | 0.657 | 0.775 | 0.783 | |
| Pima indian diabetes | **0.784** | 0.732 | 0.538 | 0.562 | 0.562 | 0.571 | 0.633 | |
| Prnn synth | 0.817 | 0.789 | 0.842 | 0.833 | 0.822 | 0.816 | **0.845** | |
| Sonar | 0.773 | 0.748 | **0.854** | 0.774 | 0.785 | 0.806 | 0.700 | |
| Soybean small | **1.000** | **1.000** | **1.000** | 0.980 | **1.000** | **1.000** | **1.000** | |
| SPECT heart | **0.730** | 0.529 | 0.590 | 0.651 | 0.709 | 0.709 | 0.590 | |
| Tae | **0.569** | 0.551 | 0.464 | 0.474 | 0.445 | 0.327 | 0.501 | |
| Thyroid disease | 0.914 | 0.927 | 0.600 | 0.932 | **0.971** | 0.911 | 0.953 | |
| Wisconsin | **0.980** | 0.978 | 0.977 | **0.980** | 0.976 | 0.976 | 0.970 | |
| Average | 0.813 | 0.790 | 0.706 | 0.774 | 0.762 | 0.780 | 0.780 | |
| Improvement | | 0.023 | 0.107 | 0.039 | 0.051 | 0.033 | 0.033 | 0.057 |

**Table 8.** The specificity (K = 10) of the proposed CAV compared with CAC, SVM, kNN, DNN-1, DNN-2, and naïve Bayes (bold indicates the row maximum).

| Dataset | CAV | CAC | SVM | kNN | DNN-1 | DNN-2 | Naïve Bayes | Average Improvement |
|---|---|---|---|---|---|---|---|---|
| Analcatdata boxing2 | 0.804 | **0.857** | 0.786 | 0.640 | 0.528 | 0.603 | 0.673 | |
| Breast w | **0.988** | **0.988** | 0.947 | 0.951 | 0.936 | 0.936 | 0.914 | |
| Congressional voting | **0.981** | 0.973 | 0.881 | 0.920 | 0.937 | 0.953 | 0.953 | |
| Ecoli | 0.850 | 0.921 | 0.000 | **0.978** | 0.973 | 0.948 | 0.969 | |
| Hayes-Roth | **0.887** | 0.873 | 0.718 | 0.819 | 0.777 | 0.823 | 0.837 | |
| Mammographic mass | 0.820 | 0.799 | **0.861** | 0.816 | 0.801 | 0.791 | 0.818 | |
| Monk's problems 2 | 0.675 | 0.857 | 0.807 | **1.000** | 0.485 | 0.583 | 0.000 | |
| Pima indian diabetes | 0.727 | 0.780 | 0.750 | 0.762 | 0.759 | 0.763 | **0.798** | |
| Prnn synth | 0.872 | 0.767 | 0.836 | **0.899** | 0.799 | 0.789 | 0.847 | |
| Sonar | 0.663 | 0.753 | **0.850** | 0.782 | 0.807 | 0.846 | 0.768 | |
| Soybean small | **1.000** | **1.000** | **1.000** | 0.993 | **1.000** | **1.000** | **1.000** | |
| SPECT heart | **0.800** | 0.750 | 0.627 | 0.649 | 0.707 | 0.707 | 0.627 | |
| Tae | **0.795** | 0.789 | 0.737 | 0.739 | 0.735 | 0.724 | 0.768 | |
| Thyroid disease | 0.960 | 0.960 | 0.809 | **0.975** | 0.935 | 0.951 | 0.971 | |
| Wisconsin | **0.983** | **0.983** | 0.951 | 0.959 | 0.936 | 0.936 | 0.921 | |
| Average | 0.854 | 0.870 | 0.771 | 0.859 | 0.808 | 0.824 | 0.791 | |
| Improvement | | −0.016 | 0.083 | −0.005 | 0.046 | 0.030 | 0.063 | 0.040 |

**Table 9.** The G-mean (K = 10) of the proposed CAV compared with CAC, SVM, kNN, DNN-1, DNN-2, and naïve Bayes (bold indicates the row maximum).

| Dataset | CAV | CAC | SVM | kNN | DNN-1 | DNN-2 | Naïve Bayes | Average Improvement |
|---|---|---|---|---|---|---|---|---|
| Analcatdata boxing2 | 0.712 | 0.655 | **0.736** | 0.643 | 0.571 | 0.634 | 0.674 | |
| Breast w | **0.977** | 0.973 | 0.963 | 0.965 | 0.959 | 0.959 | 0.950 | |
| Congressional voting | **0.970** | 0.954 | 0.922 | 0.939 | 0.952 | 0.953 | 0.949 | |
| Ecoli | 0.843 | 0.803 | 0.000 | 0.734 | 0.745 | **0.846** | 0.738 | |
| Hayes-Roth | **0.841** | 0.827 | 0.657 | 0.776 | 0.691 | 0.748 | 0.788 | |
| Mammographic mass | **0.827** | 0.825 | 0.809 | 0.795 | 0.824 | 0.816 | 0.806 | |
| Monk's problems 2 | 0.520 | 0.585 | **0.827** | 0.814 | 0.651 | 0.728 | 0.000 | |
| Pima indian diabetes | **0.743** | 0.710 | 0.690 | 0.683 | 0.724 | 0.723 | 0.728 | |
| Prnn synth | 0.831 | 0.787 | **0.844** | 0.827 | 0.835 | 0.833 | **0.844** | |
| Sonar | 0.737 | 0.740 | **0.874** | 0.807 | 0.802 | 0.812 | 0.690 | |
| Soybean small | **1.000** | **1.000** | **1.000** | 0.990 | **1.000** | **1.000** | **1.000** | |
| SPECT heart | **0.748** | 0.612 | 0.733 | 0.638 | 0.713 | 0.713 | 0.733 | |
| Tae | **0.684** | 0.674 | 0.587 | 0.595 | 0.612 | 0.452 | 0.628 | |
| Thyroid disease | 0.943 | 0.942 | 0.808 | **0.967** | 0.948 | 0.931 | 0.956 | |
| Wisconsin | **0.977** | 0.974 | 0.966 | 0.970 | 0.962 | 0.962 | 0.953 | |
| Average | 0.824 | 0.804 | 0.761 | 0.810 | 0.799 | 0.807 | 0.762 | |
| Improvement | | 0.020 | 0.063 | 0.014 | 0.024 | 0.016 | 0.061 | 0.040 |
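The six measures reported in Tables 4–9 all derive from the confusion matrix; in particular, the G-mean is the geometric mean of recall (sensitivity) and specificity. A binary-case sketch (macro-averaging for the multi-class datasets such as Ecoli is omitted):

```python
import math

def binary_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Accuracy, precision, recall, F1, specificity, and G-mean
    from a binary confusion matrix (zero-division cases return 0.0)."""
    accuracy    = (tp + tn) / (tp + fp + fn + tn)
    precision   = tp / (tp + fp) if tp + fp else 0.0
    recall      = tp / (tp + fn) if tp + fn else 0.0   # sensitivity
    specificity = tn / (tn + fp) if tn + fp else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    g_mean = math.sqrt(recall * specificity)           # geometric mean
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "f1": f1, "specificity": specificity, "g_mean": g_mean}
```

Because the G-mean is zero whenever either class is missed entirely, it explains the 0.000 entries for SVM on Ecoli and naïve Bayes on Monk's problems 2 in Table 9.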

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Wanna, P.; Wongthanavasu, S. An Efficient Cellular Automata-Based Classifier with Variance Decision Table. *Appl. Sci.* **2023**, *13*, 4346.
https://doi.org/10.3390/app13074346
