Article

A Grid-Density Based Algorithm by Weighted Spiking Neural P Systems with Anti-Spikes and Astrocytes in Spatial Cluster Analysis

Business School, Shandong Normal University, East Road of Wenhua, No.88, Jinan 250014, China
* Authors to whom correspondence should be addressed.
Processes 2020, 8(9), 1132; https://doi.org/10.3390/pr8091132
Submission received: 18 July 2020 / Revised: 20 August 2020 / Accepted: 4 September 2020 / Published: 11 September 2020
(This article belongs to the Special Issue Modeling, Simulation and Design of Membrane Computing System)

Abstract

In this paper, we propose a novel clustering approach based on P systems and a grid-density strategy. We present a grid-density based approach for clustering high-dimensional data, which first projects the data patterns onto a two-dimensional space to overcome the curse of dimensionality. Then, by meshing the plane with grid lines and deleting sparse grids, clusters are found. In particular, we present weighted spiking neural P systems with anti-spikes and astrocytes (WSNPA2 for short) to implement the grid-density based approach in parallel. Each neuron in the weighted SN P system contains a spike, which can be expressed by a computable real number. Spikes and anti-spikes are inspired by neurons communicating through excitatory and inhibitory impulses, and astrocytes have excitatory and inhibitory influences on synapses. Experimental results on multiple real-world datasets demonstrate the effectiveness and efficiency of our approach.

1. Introduction

Spiking neural P systems (SN P systems for short) are a kind of parallel and distributed neural-like computation model in the field of membrane computing [1,2]. SN P systems, which are inspired by neural cells [3], have a series of spikes and information processing rules, called firing and forgetting rules [4]. Inspired by different biological phenomena and mathematical motivations, several families of SN P systems have been constructed, such as SN P systems with anti-spikes [5], SN P systems with weights [6], SN P systems with astrocytes [7], stochastic numerical P systems [8], SN P systems with thresholds [9], numerical spiking neural P systems [10], double-layer self-organized spiking neural P systems [11], SN P systems with rules on synapses [12], and SN P systems with structural plasticity [13]. As for applications, SN P systems have been used to design logic gates, logic circuits [7] and operating systems [14], perform basic arithmetic operations [15], solve combinatorial optimization problems [16], and realize fingerprint recognition [11]. Păun, who initiated P systems, pointed out that solving real problems by membrane computing still needs to be addressed [17]. The comparative analysis of the dynamic behaviors of a hybrid algorithm indicates that combining evolutionary computation with P systems can produce a better algorithm for balancing exploration and exploitation [18,19,20]. However, such hybrid algorithms do not use the objects and rules defined by P systems, and P systems themselves are still at the stage of solving addition, subtraction, multiplication, and division [13]. How can a P system realize more complex and universally applicable functions? Clustering algorithms have universal applicability, and their inherent characteristics make them especially suitable for parallel operation through P systems, offering the possibility of reducing time complexity.
The whole process of the clustering algorithm proposed in this paper is implemented through changes of objects by rules in membranes: objects encode the data, and membrane rules working on the objects achieve the clustering goal. Real-world datasets always have multiple attributes, so they often consist of high-dimensional data. Grid-based clustering is usually used for such complex, high-dimensional data: the data space is partitioned into a certain number of cells, which are the basic units for clustering operations [21]. OPTIGRID [22] is designed to obtain an optimal grid partitioning. CLIQUE is probably the most intuitive and comprehensive grid-based clustering technique [23]. The shifting grid approach (SHIFT) has been reported to be somewhat similar to the sliding window technique. However, grid-based clustering methods face the curse of dimensionality: as the dimensionality increases, the number of grids grows exponentially. To address this problem, some methods select two features to form a plane before meshing. In [24], random projection is used to reduce the dimensionality of the data. A dynamic feature mask has been proposed to deal with the feature selection problem [25]. However, the features selected by these methods are not always the most discriminative. To further improve the clustering effect, we propose to select features based on the data distribution histogram of each dimension. Inspired by AGRID [26], we combine the grid-based clustering method with the density-based clustering method. Based on the above considerations, this paper develops a hybrid optimization method: a grid-density based algorithm by weighted SN P systems with anti-spikes and astrocytes. The characteristic of each dimension is calculated and compared by rules independently and synchronously in different membranes, and communication among membranes is used to explore clusters.
Experimental results on multiple real-world datasets demonstrate the effectiveness and efficiency of our approach.

2. Methods

2.1. Weighted Spiking Neural P Systems with Anti-Spikes and Astrocytes

Weighted spiking neural P systems with anti-spikes and astrocytes (called WSNPA2) of degree m ≥ 1 are constructs of the form

Π = (O, σ_1, …, σ_m, syn, ast_1, …, ast_k, In, Out)

where O = {a, ā} is the set of spikes, a being a spike and ā an anti-spike; the empty string is denoted by λ. σ_1, σ_2, …, σ_m are neurons, m being the degree of the system, of the form σ_i = (n_i, R_i), 1 ≤ i ≤ m, where n_i is the initial number of spikes contained in σ_i and R_i is a finite set of rules of the forms (1) E/s^c → s, where s is a spike or an anti-spike, c ≥ 1 is the number of spikes consumed by the rule, and E is a regular expression over a or ā; and (2) s^e → λ, where e ≥ 1 is the number of spikes forgotten. syn ⊆ {1, 2, …, m} × {1, 2, …, m} × ω is the set of synapses between neurons, where ω ∈ Z is the weight on synapse (i, j); for each (i, j) there is at most one synapse (i, j, ω). A rule E/s^c → s is applied as follows: if neuron σ_i contains r ≥ c spikes (anti-spikes), then the rule can fire; c spikes (anti-spikes) are consumed, r − c spikes (anti-spikes) remain in σ_i, and one spike (anti-spike) is released. The released spike is multiplied by ω and passes immediately to all neurons j with (i, j, ω) ∈ syn. s^e → λ is a forgetting rule: e spikes (anti-spikes) are removed from the neuron immediately.
For spikes a^q and anti-spikes ā^p (p, q ∈ Z being the numbers of spikes and anti-spikes), an annihilation rule aā → λ is applied in a maximal manner: a^(q−p) or ā^(p−q) remains for the next step, provided that q ≥ p or p ≥ q, respectively. ast_1, …, ast_k are astrocytes, of the form ast_i = (syn_ast_i, t_i), where syn_ast_i ⊆ syn is the subset of synapses controlled by the astrocyte and t_i is the threshold of the astrocyte. Suppose that k spikes pass along the neighboring synapses syn_ast_i. If k ≥ t_i, then ast_i has an inhibitory influence on syn_ast_i, and the k spikes are transformed into one spike by a^k → a, which is sent to the neuron connected to ast_i. Otherwise (k < t_i), ast_i has an excitatory influence on syn_ast_i: all spikes survive and reach their destination neurons.
In, Out ⊆ {1, 2, …, m} indicate the input and output neurons, respectively.
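The operational semantics above can be sketched informally in Python (a toy model of our own, not part of the formal definition; the names `fire`, `forget`, `annihilate` and `astrocyte_gate` are ours):

```python
def fire(spikes, c, weight):
    """Firing rule E/a^c -> a: if the neuron holds r >= c spikes, consume c,
    emit one spike, and multiply it by the synapse weight."""
    if spikes < c:
        return spikes, 0              # rule not applicable
    return spikes - c, 1 * weight     # (remaining spikes, spikes delivered)

def forget(spikes, e):
    """Forgetting rule a^e -> lambda: remove e spikes from the neuron."""
    return max(spikes - e, 0)

def annihilate(q, p):
    """Annihilation rule a a-bar -> lambda, applied maximally:
    only the surplus of spikes (or anti-spikes) survives."""
    k = min(q, p)
    return q - k, p - k

def astrocyte_gate(k, t):
    """Astrocyte with threshold t watching k spikes on its synapses:
    inhibitory (a^k -> a) when k >= t, excitatory (all pass) when k < t."""
    return 1 if k >= t else k

print(fire(5, 2, 3))         # 3 spikes remain, 3 delivered along a weight-3 synapse
print(annihilate(5, 2))      # a^3 remains, no anti-spikes left
print(astrocyte_gate(4, 3))  # inhibitory: 4 spikes collapse to 1
```

Here a neuron's state is just an integer spike count; the real system also checks rule applicability against the regular expression E, which this sketch omits.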

2.2. Grid-Density Based Clustering Algorithm for Multidimensional Dataset

2.2.1. Identify the Two Well-Informed Features

Generally, in grid-based methods, the computation grows exponentially with the dimensionality, because evaluations must be performed over all grid points. For example, a cluster analysis with N dimensions and L grid partitions in each dimension results in L^N grids. To avoid this curse of dimensionality, we project the data from the actual feature space into a 2D space, aiming to discover the initial locations of clusters in a plane. The plane formed by the two well-informed features n_i, n_j ∈ N is covered by an L × L lattice of grids containing the M data objects X_p (p = 1, …, M).
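The growth is easy to check numerically (a one-off illustration in Python; L = 10 is an arbitrary choice, not a value used in the experiments):

```python
# Number of grid cells L^N for L = 10 partitions per dimension:
# the lattice size explodes with the dimensionality N, which is why
# the data are first projected onto a two-dimensional plane.
L = 10
for N in (2, 3, 5, 10):
    print(f"N = {N:2d}: {L ** N:,} grids")
```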
First, each dimension of the objects is partitioned into K = [√M] bins B = {b_1, b_2, …, b_K}, and the number of data falling in each bin is

c_i(b_k) = |{X_p : p = 1, …, M, x_p^i ∈ b_k and x_p^i ∉ b_k′ for all b_k′ ∈ B, b_k′ ≠ b_k}|    (1)

where x_p^i is the value of feature n_i in data pattern X_p and |·| is the cardinality operator, i.e., the number of elements in a set. The effectiveness measure ε(n_i) of feature n_i is the number of peaks of its histogram:

ε(n_i) = |{c_i(b_k) : c_i(b_k) > c_i(b_{k−1}) and c_i(b_k) > c_i(b_{k+1})}|    (2)

For each attribute of the Wine data set, we draw a histogram according to the above rules; since the data set has 13 attributes, we obtain 13 histograms, shown in Figure 1. Because the number of peaks reflects the ability of a feature to separate the data in this data set, we take ε as the measurement standard. As shown in Figure 1, the values of ε in these histograms are 3, 2, 1, 2, 1, 2, 2, 4, 3, 3, 3, 4, 5, respectively. According to these values, the features n_8, n_12 and n_13 are selected. If several features share the same maximum value of ε, we divide each dimension into K/2 bins and recount the peaks until two well-informed features are selected. These features are then used for the cluster analysis.
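Under this reading (K = ⌈√M⌉ equal-width bins, ε = number of interior histogram peaks), the feature selection step might be sketched as follows; `peak_count` and `select_two_features` are our own illustrative names, not the paper's, and the tie-breaking detail is our interpretation:

```python
import numpy as np

def peak_count(x, n_bins):
    """epsilon(n_i): number of local maxima in the equal-width
    histogram of one feature, as in Eq. (2)."""
    counts, _ = np.histogram(x, bins=n_bins)
    return sum(counts[k] > counts[k - 1] and counts[k] > counts[k + 1]
               for k in range(1, len(counts) - 1))

def select_two_features(X):
    """Rank features by peak count; when a tie leaves the top two
    ambiguous, halve the number of bins and recount (needs >= 3 features)."""
    M, N = X.shape
    n_bins = int(np.ceil(np.sqrt(M)))          # K = [sqrt(M)] bins
    while True:
        eps = np.array([peak_count(X[:, i], n_bins) for i in range(N)])
        top = np.argsort(eps)[::-1]
        # top two are unambiguous, or the bins cannot be halved further
        if eps[top[1]] > eps[top[2]] or n_bins <= 3:
            return int(top[0]), int(top[1])
        n_bins //= 2
```

This assumes numeric features; the membrane implementation in Section 2.3 performs the same counts in parallel across neurons.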

2.2.2. Clustering by Grid-Density Based Algorithm

The plane formed by the two well-informed features is covered by a lattice of H = L × L grids, denoted G = {g_1, g_2, …, g_H}. C(g_h), h ∈ {1, 2, …, H}, is the number of data patterns X_p falling into grid g_h according to (3):

C(g_h) = |{X_p : p = 1, …, M, X_p ∈ g_h}|, g_h ∈ G    (3)

Next, non-dense grids are deleted. A grid is dense if C(g_h) > θ, where θ ∈ N+ is a threshold defined before the computation. The threshold is initialized to 2% of the number of data; on this basis it is varied upwards by up to 10% and downwards by up to 1%, and several experiments are performed to select the threshold that gives the best clustering effect. After obtaining the initial grid graph G, G is refined by finding the dense grids and discarding the sparse ones. The refined grid graph is defined as:

G_r = {g_h : C(g_h) > θ} ⊆ G    (4)
Each grid g_h ∈ G_r has 4 neighbors connected with it, as shown in Figure 2. A cluster is a set of neighboring dense grids; it is complete when no further dense grid can be connected to it. The process of the clustering algorithm is shown in Algorithm 1 below.
Algorithm 1: Grid-density based clustering algorithm.
Inputs: Ω = {x_p^i, 1 ≤ p ≤ M, 1 ≤ i ≤ N}, H = L × L, θ: density threshold
Outputs: CS = {CS_1, CS_2, …, CS_t}
Begin
for all features n_i, i = 1, 2, …, N
  use K = [√M] bins to partition the feature n_i
  obtain the number of data in each bin B = {b_1, …, b_K} by (1)
  compute the effectiveness measure ε(n_i) for n_i by (2)
rank ε(n_i)
get the two top-ranked features
project data patterns into the H = L × L grids
obtain the capacity C(g_h) of each grid by (3)
select dense grids by (4)
form the cluster set by combining neighboring dense grids
return the t clusters, CS = {CS_1, CS_2, …, CS_t}
End
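The clustering stage of Algorithm 1 can be illustrated with a minimal sequential sketch (our own implementation, not the membrane system itself; `grid_density_cluster` is a hypothetical name, and the flood fill over the 4-neighborhood is our reading of "combining neighboring dense grids"):

```python
import numpy as np
from collections import deque

def grid_density_cluster(P, L, theta):
    """P is an (M, 2) array of the projected data; the plane is covered by
    an L x L grid and grids with more than theta points are dense (Eq. (4)).
    Neighboring dense grids are merged into clusters by a flood fill."""
    lo, hi = P.min(axis=0), P.max(axis=0)
    cell = np.minimum(((P - lo) / (hi - lo + 1e-12) * L).astype(int), L - 1)
    counts = np.zeros((L, L), dtype=int)
    for gx, gy in cell:
        counts[gx, gy] += 1
    dense = counts > theta
    label = -np.ones((L, L), dtype=int)   # -1 marks sparse grids
    t = 0
    for x in range(L):
        for y in range(L):
            if dense[x, y] and label[x, y] < 0:
                q = deque([(x, y)])
                label[x, y] = t
                while q:                  # grow the cluster over 4-neighbors
                    cx, cy = q.popleft()
                    for nx, ny in ((cx+1, cy), (cx-1, cy), (cx, cy+1), (cx, cy-1)):
                        if 0 <= nx < L and 0 <= ny < L and dense[nx, ny] and label[nx, ny] < 0:
                            label[nx, ny] = t
                            q.append((nx, ny))
                t += 1
    return [int(label[gx, gy]) for gx, gy in cell]   # cluster id per point
```

Points landing in sparse grids get the label −1; the threshold θ would be tuned as described above.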

2.3. Multi-WSNPA2 Design for Grid-Density Based Clustering

2.3.1. Grid-Density Based Clustering by Multi-WSNPA2

In this section, the weighted spiking neural P system with anti-spikes and astrocytes is designed for grid-density based clustering. Objects in each neuron are organized as spikes and anti-spikes with real-valued numbers corresponding to Ω = {x_p^i, 1 ≤ p ≤ M, 1 ≤ i ≤ N}. Feature selection and cluster analysis are implemented by the rules of WSNPA2. WSNPA2 is divided into three subsystems: feature selection, effectiveness comparison and clustering. The structure of WSNPA2 is shown in Figure 3, where ovals represent neurons, rhombi stand for astrocytes, and arrows indicate channels. The WSNPA2 for the grid-density based clustering algorithm is described as the following construct
Π = (O, σ_S1, σ_S2, σ_S3, syn, R, ast_S1, ast_S3, σ_in1, …, σ_inN, σ_out1, …, σ_outt)

where O = {a, ā}. At the beginning, the input neurons contain x_p^i copies of spike a. σ_S1 stands for the neurons in the feature selection subsystem, σ_S1 = {DM_iz, F_iz, FS_iz′}, 1 ≤ i ≤ N, 1 ≤ z ≤ [√M], 1 ≤ z′ ≤ 2[√M]/3. σ_S2 represents the neurons in the effectiveness comparison subsystem, σ_S2 = ∪_{1≤i≤N}({E_i} ∪ {EC_i}) ∪ {ECS}. σ_S3 describes the neurons in the clustering subsystem, σ_S3 = {C_ij, i ∈ {1, 2}, 1 ≤ j ≤ N} ∪ {G_gg′, 1 ≤ g, g′ ≤ L} ∪ {CS}. The astrocytes ast_S1 in the feature selection subsystem lie between each pair of neurons DM_iz; the number of astrocytes ast_S3 in the clustering subsystem is L × L × 2 + 1. The input neurons σ_in1, …, σ_inN are in the feature selection subsystem, and the output neurons σ_out1, …, σ_outt are in the clustering subsystem.
There are several different clustering subsystems working in parallel for different grid numbers H = L × L, which means the whole system can output several clustering results simultaneously. The clustering results obtained by the different clustering subsystems are then connected according to the neighboring relationship of grid positions, so as to obtain the final clustering result. Multi-WSNPA2 makes the calculation proceed in parallel in the feature selection, effectiveness comparison and clustering subsystems, respectively. The complexity is thus reduced from O(n) to O(kn), where k is a constant less than 1. The details of how the complexity of the grid-density based algorithm is calculated are as follows:
  • The complexity of traversing the N data to form feature histograms is N.
  • The complexity of counting the data falling into each interval of a histogram is N.
  • The complexity of checking whether the count in each rectangle of a histogram exceeds its left and right neighbors is K, where K is the number of rectangles.
  • The complexity of finding the two features with the most peaks is A, where A is the number of features in the data set.
  • The complexity of projecting the data patterns into the H = L × L grids is N.
  • The complexity of counting the data in each grid is N.
  • The complexity of selecting the dense grids is L².
  • The complexity of combining neighboring dense grids is L² − D, where D is the number of grids removed.
The complexity of the grid-density based algorithm is therefore O(N + N + K + A + N + N + L² + (L² − D)), where K, A, L and D are constants; this simplifies to O(n). When multi-WSNPA2 is used to run the algorithm, the data traversals in steps 1 and 5, the interval traversal in step 2, and the grid traversals in steps 6 and 7 are parallel operations, so the complexity becomes O(1 + maxdata_K + K + K + 1 + maxdata_L + 1 + 1), where maxdata_K and maxdata_L are the maximum number of data in any interval and in any grid, respectively. This simplifies to O(kn), where k is a constant less than 1.
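As a purely illustrative tally, the two cost formulas can be compared with assumed values (a Wine-sized problem; none of these numbers come from the experiments):

```python
# Assumed problem size, for illustration only.
N, K, A, L, D = 178, 14, 13, 10, 40   # data, bins, features, grid side, removed grids
max_bin, max_grid = 30, 25            # assumed max load of any bin / any grid

# Sequential cost: N + N + K + A + N + N + L^2 + (L^2 - D)
sequential = 4 * N + K + A + L**2 + (L**2 - D)

# Parallel cost once membranes process bins and grids simultaneously:
# 1 + maxdata_K + K + K + 1 + maxdata_L + 1 + 1
parallel = 1 + max_bin + K + K + 1 + max_grid + 1 + 1

print(sequential, parallel)   # the parallel step count is far smaller
```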
syn represents the synapses among neurons:

syn(DM_iz, ast_S1), 1 ≤ i ≤ N, 1 ≤ z ≤ [√M]
syn(F_iz, ast_S1), 1 ≤ i ≤ N, 1 ≤ z ≤ [√M]
syn(F_iz, FS_iz′), 1 ≤ i ≤ N, 1 ≤ z′ ≤ 2[√M]/3
syn(FS_iz′, E_i), 1 ≤ i ≤ N, 1 ≤ z′ ≤ 2[√M]/3
syn(E_i, EC_i), 1 ≤ i ≤ N
syn(EC_i, ECS), 1 ≤ i ≤ N
syn(DM_i1, C_i1), 1 ≤ i ≤ N
syn(C_ij, ast_S3), i ∈ {1, 2}, 1 ≤ j ≤ N
syn(C_ij, G_gg′), i ∈ {1, 2}, 1 ≤ j ≤ N, 1 ≤ g, g′ ≤ L
syn(G_gg′, ast_S3), 1 ≤ g, g′ ≤ L
syn(ast_S3, CS)
R is the following set of firing and forgetting rules ([ ]_x means the rule works in neuron x; otherwise the rule executes through all neurons):

[a^(x_p^i) → a^(x_p^i)]_DM_iz, x_p^i < t_ih;   a^(x_p^i) → a, x_p^i > t_ih
[a^f → a^f]_F_iz;   [a^(f_2 − f_1) → a]_FS_iz′, f_2 − f_1 > 0
[a^(2m) → a^(2m+2)]_E_i;   [a^(2m+2)/a^m → a^m]_E_i;   [a^m → a^m]_EC_i
[a → a]_E_i;   [a → a]_EC_i;   [a^m → ā^2]_ECS;   [ā^2 a^m → λ]_E_i
[a^m → ā^2]_ECS;   [ā^2 a^m → λ]_E_i;   [ā^2 a^(x_ij) → a^(x_ij)]_DM_iz
[a^(x_ij) → a^(x_ij)]_CS_ij;   a^(x_ij) → a, x_ij > θ_i
[a → a]_G_gg′;   [a^4/a^3 → ā^2]_G_gg′;   [a^n → λ]_G_gg′, n < θ
[a^n → a]_G_gg′, n ≥ θ;   [a → a]_G_gg′

2.3.2. Overview of Computations

A data set of M observations is codified by spikes a^(x_p^i), 1 ≤ i ≤ N, 1 ≤ p ≤ M. The computation of the P system is split into three subsystems. When the a^(x_p^i) arrive in neuron DM_i1, the computation begins in parallel.
In the feature selection subsystem, the threshold t_ih of each astrocyte ast_S1ih is t_ih = [h(X_i^max − X_i^min)/[√M]], 1 ≤ i ≤ N, 1 ≤ h ≤ [√M]. If x_p^i > t_ih, x_p^i is said to belong to the current neuron DM_iz, and rule 2 adds a spike to DM_iz. Otherwise, a^(x_p^i) passes through DM_iz to find its own neuron (bin) by rule 1. After all a^(x_p^i) have been processed by rules 1 and 2, the peak of each dimension is chosen by rules 3 and 4.
All the peaks of dimension i are gathered as spikes a in neuron E_i. Then the effectiveness comparison subsystem starts, and the maximum number of peaks over the dimensions is selected by rules 5–9. Rules 5 and 6 copy the peaks a^m into a^(2m+2) and send a^m into neuron EC_i in preparation. Then the differing numbers of a^m are decreased one by one by rule 8. Rule 9 lets ECS collect all dimensions except the one with the maximum number of peaks. The serial number of the neuron that sends out ā^2 by rule 10 is chosen as the first dimension for clustering. The other effectiveness comparison subsystems work in the same way, except that the already chosen dimension is deleted by rule 11.
Rule 12 activates the input neurons of the two selected features, and the clustering subsystem begins. Rules 13–14 put the observations into suitable bins in their own dimensions (θ_i = [(i − 1)(X_i^max − X_i^min)/L]). Then rule 15 selects the grid that holds two spikes; it is chosen as the initial grid of a cluster. Rule 16 activates the input neurons of the two selected features again, and rules 17–19 find the dense grids. Rules 12–16 continue to work until there is no spike input. The clustering result is given by the serial numbers of the neurons that output a via rule 19.

3. Results and Discussion

The experiments set out to investigate the performance of the proposed approach compared to classical clustering algorithms. We conduct experiments on ten real-world datasets, all from the UCI repository (https://archive.ics.uci.edu/ml/datasets.php). Table 1 summarizes these data sets, ordered by their number of attributes.
The amount of resources necessary to define the multi-WSNPA2 of grid-density based clustering for the ten datasets is shown in Table 2.
To compare the algorithm with k-means, AHC (agglomerative hierarchical clustering) and two other recent algorithms more precisely, their clustering performance in terms of accuracy is reported in Table 3. The AHC uses the Ward linkage [27], which is appropriate for Euclidean distance. The accuracy of a clustering is the proportion of objects assigned to the right cluster in each class.
The accuracy is clearly comparable to that of k-means, AHC and the two other algorithms, and even better on average (averages in boldface), which indicates that the clustering effect of our method is better than that of the other algorithms.
The intrinsic maximal parallelism of P systems can be exploited to speed up solutions; to achieve this, the model needs several ingredients, among them the ability to generate an exponential workspace in polynomial time. The computational cost is higher than that of k-means because the last stage of the algorithm is repetitive. Table 4 compares the running times against k-means and AHC, with the fastest (on average) shown in boldface. The results show that our algorithm clusters faster on most data sets.

4. Conclusions

This paper discusses the use of weighted spiking neural P systems with anti-spikes and astrocytes to develop a novel hybrid grid-density based algorithm for solving clustering problems. The algorithm first projects the data patterns onto a two-dimensional space to overcome the curse of dimensionality; to choose the two well-informed features, a simple and fast feature selection algorithm is proposed. Then, by meshing the plane with grid lines and deleting sparse grids, clusters are found. In particular, we presented weighted spiking neural P systems with anti-spikes and astrocytes (WSNPA2 for short) to implement the grid-density based approach in parallel. Each neuron in the weighted SN P system contains a spike, which can be expressed by a computable real number. Spikes and anti-spikes are inspired by neurons communicating through excitatory and inhibitory impulses, and astrocytes have excitatory and inhibitory influences on synapses. The characteristic of each dimension is calculated and compared by rules independently and synchronously in different membranes, and communication among membranes is used to explore clusters. Experimental results on multiple real-world datasets demonstrate the effectiveness and efficiency of our approach compared to classical k-means, AHC and two other recent algorithms.

Author Contributions

Conceptualization, J.X.; methodology, D.K.; formal analysis, Y.W.; writing-original draft, D.K. and X.W.; writing-review and editing, X.L. and J.Q.; supervision, J.X. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China (No. 61802234, 61876101), Natural Science Foundation of Shandong Province (No. ZR2019QF007), and China Postdoctoral Project (No. 2017M612339).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Song, T.; Gong, F.; Liu, X. Spiking Neural P Systems With White Hole Neurons. IEEE Trans. Nanobiosci. 2016, 15, 666–673. [Google Scholar] [CrossRef] [PubMed]
  2. Song, T.; Pan, L.; Wu, T. Spiking Neural P Systems With Learning Functions. IEEE Trans. Nanobiosci. 2019, 18, 176–190. [Google Scholar] [CrossRef] [PubMed]
  3. Wu, T.; Păun, A.; Zhang, G.; Neri, F. Spiking Neural P Systems With Polarizations. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 3349–3360. [Google Scholar] [PubMed]
  4. Song, T.; Zeng, X.; Zheng, P.; Jiang, M.; Rodríguez-Patón, A. A Parallel Workflow Pattern Modeling Using Spiking Neural P Systems With Colored Spikes. IEEE Trans. Nanobiosci. 2018, 17, 474–484. [Google Scholar] [CrossRef]
  5. Song, T.; Liu, X.; Zeng, X. Asynchronous Spiking Neural P Systems with Anti-Spikes. Neural Process. Lett. 2015, 42, 633–647. [Google Scholar] [CrossRef]
  6. Wang, J.; Hoogeboom, H.; Pan, L.; Păun, G.; Perez-Jimenez, M. Spiking neural P systems with weights. Neural Comput. 2010, 22, 2615–2646. [Google Scholar] [CrossRef]
  7. Frias, T.; Diaz, C.; Sanchez, G.; Garcia, G.; Avalos, G.; Perez, H. Four Single Neuron Arithmetic Circuits based on SN P Systems with Dendritic Behavior, Astrocyte-like control and rules on the synapses. IEEE Lat. Am. Trans. 2018, 16, 38–45. [Google Scholar] [CrossRef]
  8. Yang, J.; Peng, H.; Luo, X.; Wang, J. Stochastic Numerical P Systems With Application in Data Clustering Problems. IEEE Access 2020, 8, 31507–31518. [Google Scholar] [CrossRef]
  9. Zeng, X.; Zhang, X.; Song, T.; Pan, L. Spiking neural P systems with thresholds. Neural Comput. 2014, 26, 1340–1361. [Google Scholar] [CrossRef]
  10. Wu, T.; Pan, L.; Yu, Q.; Tan, K.C. Numerical Spiking Neural P Systems. IEEE Trans. Neural Netw. Learn. Syst. 2020, 1–15. [Google Scholar] [CrossRef]
  11. Ma, T.; Hao, S.; Wang, X.; Rodríguez-Patón, A.A.; Wang, S.; Song, T. Double Layers Self-Organized Spiking Neural P Systems With Anti-Spikes for Fingerprint Recognition. IEEE Access 2019, 7, 177562–177570. [Google Scholar] [CrossRef]
  12. Song, T.; Pan, L. Spiking Neural P Systems With Rules on Synapses Working in Maximum Spikes Consumption Strategy. IEEE Trans. Nanobiosci. 2015, 14, 38–44. [Google Scholar] [CrossRef] [PubMed]
  13. Cabarle, F.; Adorna, H.; Perez-Jimenez, M.; Song, T. Spiking neural P systems with structural plasticity. Neural Comput. Appl. 2015, 26, 1905–1917. [Google Scholar] [CrossRef]
  14. Adl, A.; Badr, A.; Farag, I. Towards a Spiking Neural P Systems OS. arXiv 2010, arXiv:1012.0326. [Google Scholar]
  15. Liu, X.; Li, Z.; Liu, J.; Liu, L.; Zeng, X. Implementation of arithmetic operations with time-free spiking neural P systems. IEEE Trans. Nanobiosci. 2015, 14, 617–624. [Google Scholar] [CrossRef] [PubMed]
  16. Zhang, G.; Rong, H.; Neri, F.; Perez-Jimenez, M. An optimization spiking neural P system for approximately solving combinatorial optimization problems. Int. J. Neural Syst. 2014, 24, 1440006. [Google Scholar] [CrossRef]
  17. Păun, G. Computing with Membranes. J. Comput. Syst. Sci. 2000, 61, 108–143. [Google Scholar] [CrossRef] [Green Version]
  18. Nishida, T. Membrane algorithm with Brownian sub algorithm and genetic sub algorithm. Int. J. Found. Comput. Sci. 2007, 18, 1353–1360. [Google Scholar] [CrossRef]
  19. Zhang, G.; Marian, G.; Wu, C. A quantum-inspired evolutionary algorithm based on P systems for Knapsack problem. Fundam. Inform. 2008, 87, 93–116. [Google Scholar]
  20. Huang, L.; Suh, I.H.; Abraham, A. Dynamic multi-objective optimization based on membrane computing for control of time-varying unstable plants. Inf. Sci. 2011, 181, 2370–2391. [Google Scholar] [CrossRef]
  21. Agrawal, R.; Gehrke, J.; Gunopulos, D.; Raghavan, P. Automatic subspace clustering of high dimensional data for data mining applications. In Proceedings of the ACM SIGMOD on Management of Data, Seattle, WA, USA, 1–4 June 1998. [Google Scholar]
  22. Zeng, X.; Song, T.; Zhang, X.; Pan, L. Performing Four Basic Arithmetic Operations With Spiking Neural P Systems. IEEE Trans. Nanosci. 2012, 11, 366–374. [Google Scholar]
  23. Zhao, Y.; Song, J. AGRID: An efficient algorithm for clustering large high dimensional data sets. In Proceedings of the Pacific-Asia Conference on Knowledge Discovery and Data Mining, Seoul, Korea, 30 April–2 May 2003. [Google Scholar]
  24. Rathore, P.; Kumar, D.; Bezdek, J.C.; Rajasegarar, S.; Palaniswami, M. A Rapid Hybrid Clustering Algorithm for Large Volumes of High Dimensional Data. IEEE Trans. Knowl. Data Eng. 2019, 31, 641–654. [Google Scholar] [CrossRef]
  25. Fahy, C.; Yang, S. Dynamic Feature Selection for Clustering High Dimensional Data Streams. IEEE Access 2019, 7, 127128–127140. [Google Scholar] [CrossRef]
  26. Monti, A.; Ponci, F. Power grids of the future: Why smart means complex. In Proceedings of the 2010 Complexity in Engineering, Rome, Italy, 22–24 February 2010; pp. 7–11. [Google Scholar]
  27. Shang, R.; Zhang, W.; Li, F.; Jiao, L.; Stolkin, R. Multi-objective artificial immune algorithm for fuzzy clustering based on multiple kernels. Swarm Evol. Comput. 2019, 50, 100485. [Google Scholar] [CrossRef]
  28. Rashno, E.; Minaei-Bidgoli, B.; Guo, Y. An effective clustering method based on data indeterminacy in neutrosophic set domain. Eng. Appl. Artif. Intell. 2020, 89, 103411. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Histogram for the 13 features of Wine data set.
Figure 2. Neighbors of grid g_h ∈ G_r.
Figure 3. Structure of WSNPA2 for grid-density based clustering algorithm.
Table 1. Ten real-world datasets of UCI.
Data Set | Number of Attributes | Number of Classes | Number of Objects
Haberman | 3 | 2 | 306
Iris | 4 | 3 | 150
Thyroid | 5 | 4 | 215
Ecoli | 7 | 8 | 336
Diabetes | 8 | 3 | 768
Breast | 9 | 3 | 699
Glass | 9 | 6 | 214
Wine | 13 | 3 | 178
Vehicle | 18 | 4 | 846
Ionosphere | 34 | 2 | 351
Table 2. The amount of necessary resources to define multi-WSNPA2 of the ten datasets.
Data Set | Parallel Steps | Initial Cells | Initial Objects | Number of Rules
Haberman | 314 | 52 | 36,517 | 929
Iris | 155 | 49 | 2.08 × 10^3 | 617
Thyroid | 223 | 73 | 2.76 × 10^4 | 1097
Ecoli | 344 | 128 | 1.18 × 10^3 | 2389
Diabetes | 778 | 222 | 2.76 × 10^5 | 6170
Breast | 694 | 235 | 19,331 | 6183
Glass | 222 | 132 | 2.17 × 10^4 | 1958
Wine | 190 | 173 | 1.60 × 10^5 | 2340
Vehicle | 866 | 524 | 1,581,507 | 15,268
Ionosphere | 359 | 618 | 2.64 × 10^3 | 11,628
Table 3. Clustering accuracy of the algorithms (proportion of correctly clustered objects in each class).
Data Set | The Algorithm | K-Means | AHC | MAFC [27] | Rashno E. et al. [28]
Haberman | 47.82% | 48.64% | 50.06% | - | -
Iris | 93.33% | 89.79% | 91.54% | 90.7% | 94.66%
Breast | 93.99% | 96.06% | 95.83% | - | 91.41%
Wine | 97.75% | 95.20% | 97.73% | - | 83.14%
Ionosphere | 72.93% | 70.20% | 70.54% | 72.4% | -
Average | 81.16% | 79.97% | 81.14% | - | -
Table 4. Comparison of time consuming among the three algorithms.
Data Set | The Algorithm | K-Means | AHC
Haberman | 0.07 s | 0.08 s | 0.07 s
Iris | 0.03 s | 0.05 s | 0.08 s
Thyroid | 0.05 s | 0.04 s | 0.06 s
Ecoli | 0.07 s | 0.04 s | 0.09 s
Breast | 0.15 s | 0.05 s | 2.48 s
Glass | 0.05 s | 0.07 s | 0.07 s
Wine | 0.04 s | 0.07 s | 0.05 s
Vehicle | 0.19 s | 0.14 s | 0.46 s
Ionosphere | 0.08 s | 0.06 s | 0.14 s
Average | 0.08 s | 0.07 s | 0.38 s

Share and Cite


Kong, D.; Wang, Y.; Wu, X.; Liu, X.; Qu, J.; Xue, J. A Grid-Density Based Algorithm by Weighted Spiking Neural P Systems with Anti-Spikes and Astrocytes in Spatial Cluster Analysis. Processes 2020, 8, 1132. https://doi.org/10.3390/pr8091132

