
A Rule-Based Method to Locate the Bounds of Neural Networks

Department of Informatics and Telecommunications, University of Ioannina, 47100 Arta, Greece
Author to whom correspondence should be addressed.
Knowledge 2022, 2(3), 412-428;
Submission received: 24 May 2022 / Revised: 28 July 2022 / Accepted: 9 August 2022 / Published: 11 August 2022


An advanced method for training artificial neural networks is presented, which aims to identify the optimal interval for the initialization and training of the network parameters. The location of the optimal interval is performed using rules evolved by a genetic algorithm. The method has two phases: in the first phase, an attempt is made to locate the optimal interval, and in the second phase, the artificial neural network is initialized and trained in this interval using a method of global optimization, such as a genetic algorithm. The method has been tested on a range of classification and function-learning datasets, and the experimental results are extremely encouraging.

1. Introduction

Artificial neural networks (ANNs) are programming tools [1,2] based on a series of parameters that are commonly called weights or processing units. They have been used in a variety of problems from different scientific areas, such as physics [3,4,5], the solution of differential equations [6,7], agriculture [8,9], chemistry [10,11,12], economics [13,14,15], medicine [16,17], etc. A common way to express a neural network is as a function N(x, w), with x the input vector (commonly called the pattern) and w the weight vector. A training method must be used to estimate the vector w for a given problem. The training procedure can also be formulated as an optimization problem, where the target is to minimize the so-called error function:
E(N(x, w)) = \sum_{i=1}^{M} \left( N(x_i, w) - y_i \right)^2 \quad (1)
In Equation (1), the set \{ (x_i, y_i), \ i = 1, \ldots, M \} is the dataset used to train the neural network, with y_i being the actual output for the point x_i. The neural network N(x, w) can be modeled as a summation of processing units, as proposed in [18]:
N(x, w) = \sum_{i=1}^{H} w_{(d+2)i-(d+1)} \, \sigma\!\left( \sum_{j=1}^{d} x_j \, w_{(d+2)i-(d+1)+j} + w_{(d+2)i} \right) \quad (2)
with H the number of processing units in the neural network and d the dimension of vector x . The function σ ( x ) is the sigmoid function defined as:
\sigma(x) = \frac{1}{1 + \exp(-x)} \quad (3)
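For concreteness, Equations (1)–(3) can be sketched in a few lines of Python. This is an illustrative reimplementation, not the authors' software; the names `neural_net` and `train_error` are ours:

```python
import math

def sigmoid(x):
    """The sigmoid of Equation (3)."""
    return 1.0 / (1.0 + math.exp(-x))

def neural_net(x, w, H):
    """Evaluate N(x, w) of Equation (2): H sigmoid units whose weights are
    stored in one flat vector w of dimension n = (d + 2) * H."""
    d = len(x)
    out = 0.0
    for i in range(1, H + 1):
        base = (d + 2) * i  # the 1-based index (d+2)i of Equation (2)
        # inner product of the pattern with the unit's input weights
        s = sum(x[j - 1] * w[base - (d + 1) + j - 1] for j in range(1, d + 1))
        s += w[base - 1]  # bias term w_{(d+2)i}
        out += w[base - (d + 1) - 1] * sigmoid(s)  # output weight w_{(d+2)i-(d+1)}
    return out

def train_error(w, X, Y, H):
    """The error function of Equation (1) over a dataset (X, Y)."""
    return sum((neural_net(x, w, H) - y) ** 2 for x, y in zip(X, Y))
```

With d inputs and H units, the flat vector holds, for each unit, one output weight, d input weights and one bias, which gives the dimension n = (d + 2)H stated below.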
From Equation (2), it follows that the dimension of the weight vector w is n = (d + 2)H. The function of Equation (1) has been minimized with a variety of optimization methods over the years, such as the back propagation method [19,20], the RPROP method [21,22,23], quasi-Newton methods [24,25], simulated annealing [26,27], genetic algorithms [28,29], particle swarm optimization [30,31], etc. In addition, various researchers have worked on the initialization of the weights of neural networks, such as initialization using decision trees [32], an initialization method based on Cauchy’s inequality [33], a method based on discriminant learning [34], etc. Another topic that has attracted the interest of many researchers is weight decaying, which is a regularization method that adapts the weights of the network aiming to avoid the overfitting problem. Several papers have appeared in this area with methods such as those with positive correlation [35], the SarProp algorithm [36], the incorporation of pruning techniques [37], etc. In addition, more advanced and more recent techniques from the area of computational intelligence have been proposed for neural network training, such as the differential evolution method [38,39], the construction of neural networks with ant colony optimization [40], the construction of neural networks using grammatical evolution to solve differential equations [41], etc. Furthermore, due to the development of GPUs, many works have been published that take advantage of these processing units [42,43].
The present work proposes an innovative interval generation technique for the initialization and training of artificial neural network parameters. This new method has its roots in interval methods [44,45,46]. In the current work, using arithmetic intervals, a set of rules for dividing the initial interval for the parameters of an artificial neural network is constructed. The construction is carried out using a hybrid genetic algorithm, in which chromosomes are the set of division rules. After the termination of the genetic algorithm, the artificial neural network is initialized in the interval resulting from the application of the optimal partitioning rules and then trained using a genetic algorithm.
The method has two objectives: the first is to detect a small initialization interval for the parameters of the artificial neural network, and the second is to accelerate the training of the network. For the first objective, using information from the training data, the algorithm attempts to identify the interval that will ultimately give better results. For the second objective, once a small interval of values has been detected, a global optimization method can be used more efficiently to locate the lowest value of the network error.
The proposed method is expected to achieve significant results since, in principle, it has all the advantages of genetic algorithms, such as tolerance to errors, possibilities for parallel implementation, the efficient exploration of the search space, etc. In addition, the first phase of the method reduces the volume of the possible values for the weights, so that in the second phase the search for the global minimum of the network error function becomes more efficient and faster.
The proposed methodology can even be applied to different types of artificial neural networks, such as recurrent neural networks [47,48]. A simple recurrent neural network can be expressed as a single neural cell with a single input, a single output and a state (also known as the memory of the cell). Given the input of the cell x(t) at step t and the previous state of the cell h(t-1) at step t-1, the updated state h(t) and the output y(t) are estimated as shown in the equations:
h(t) = f\left( W_{hh} \, h(t-1) + W_{xh} \, x(t) + b_h \right) \quad (4)
y(t) = \sigma\left( W_{hy} \, h(t) + b_y \right) \quad (5)
where f is usually the softmax function and \sigma is the sigmoid. The proposed method can be used here to estimate a promising bounding box for the parameters W and b of the network before any other training method is applied.
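A minimal sketch of this recurrent cell for the scalar single-input/single-output case follows; tanh stands in for the generic activation f, since the softmax of a scalar is constant, and all weights are plain numbers rather than matrices:

```python
import math

def rnn_step(x_t, h_prev, w_hh, w_xh, b_h, w_hy, b_y):
    """One step of the simple recurrent cell of Equations (4) and (5),
    specialized to scalars for illustration."""
    h_t = math.tanh(w_hh * h_prev + w_xh * x_t + b_h)      # Equation (4)
    y_t = 1.0 / (1.0 + math.exp(-(w_hy * h_t + b_y)))      # Equation (5)
    return h_t, y_t
```

Here the vector of parameters (w_hh, w_xh, b_h, w_hy, b_y) is exactly the quantity whose bounding box the proposed method would estimate.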
The rest of this article is organized as follows: in Section 2, the proposed method is discussed in detail; in Section 3, the experimental datasets as well as the results from the application of the proposed method are provided; and finally, in Section 4, some conclusions and guidelines for future enhancements are presented.

2. Method Description

The proposed method consists of two major steps: in the first step, partition rules for the initial value interval of the parameters of the artificial neural network are constructed, and in the second step, the artificial neural network is initialized in the optimal space resulting from the first step and training takes place. The training is performed through a second genetic algorithm. In the first genetic algorithm, the chromosomes are sets of partition rules for the initial value interval of the artificial neural network, and in the second genetic algorithm, the chromosomes are the parameters of the artificial neural network. It is obvious that this is a time-consuming process, and modern parallel techniques such as the OpenMP [49] library must be used to accelerate it. The first genetic algorithm is analyzed in Section 2.1 and the second in Section 2.5.

2.1. Locating the Best Rules

Firstly, we introduce the rule set I_n, where:
I_n = \{ (l_1, r_1), (l_2, r_2), \ldots, (l_n, r_n) \} \quad (6)
where l_i \in \{0, 1\}, r_i \in \{0, 1\} and i = 1, \ldots, n. The set I_n defines the set of partition rules for a function defined as
f : S \to \mathbb{R}, \quad S \subset \mathbb{R}^n \quad (7)
with S:
S = [a_1, b_1] \times [a_2, b_2] \times \cdots \times [a_n, b_n] \quad (8)
If l_i = 1 then a_i = a_i / 2, and if r_i = 1 then b_i = b_i / 2. For example, consider the Rastrigin function:
f(x) = x_1^2 + x_2^2 - \cos(18 x_1) - \cos(18 x_2), \quad x \in [-1, 1]^2 \quad (9)
Also consider the set I_2 = \{ (1, 0), (0, 1) \}. The produced bounding box for the Rastrigin function is now S' = [-0.5, 1] \times [-1, 0.5].
Subsequently, we introduce the extended set R_{Kn} as a set of production rules defined as:
R_{Kn} = \{ I_n^{(1)}, I_n^{(2)}, \ldots, I_n^{(K)} \} \quad (10)
where I_n^{(i)}, i = 1, \ldots, K, are rule sets of Equation (6). For example, let K = 2 for the Rastrigin function and R_{22} = \{ \{ (1, 0), (0, 1) \}, \{ (1, 0), (1, 1) \} \}. The final bounding box is obtained after applying the sets \{ (1, 0), (0, 1) \} and \{ (1, 0), (1, 1) \} to the original box S in sequence. The computation steps are:
  • Apply \{ (1, 0), (0, 1) \} to S, yielding S' = [-0.5, 1] \times [-1, 0.5].
  • Apply \{ (1, 0), (1, 1) \} to S', yielding S'' = [-0.25, 1] \times [-0.5, 0.25].
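The halving rules and their sequential application can be sketched as follows. This is an illustrative implementation (function names are ours); a box is represented as a list of (a_i, b_i) pairs:

```python
def apply_rules(box, rule_set):
    """Apply one rule set I_n = [(l_1, r_1), ...] to a bounding box given as
    [(a_1, b_1), ...]: l_i = 1 halves the left endpoint a_i, and r_i = 1
    halves the right endpoint b_i."""
    return [(a / 2 if l == 1 else a, b / 2 if r == 1 else b)
            for (a, b), (l, r) in zip(box, rule_set)]

def apply_rule_sequence(box, rule_sequence):
    """Apply the K rule sets of a chromosome R_Kn in order."""
    for rule_set in rule_sequence:
        box = apply_rules(box, rule_set)
    return box
```

Running `apply_rule_sequence([(-1, 1), (-1, 1)], [[(1, 0), (0, 1)], [(1, 0), (1, 1)]])` reproduces the two computation steps of the Rastrigin example.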
We consider chromosomes in the form of Equation (10) for the first phase of the proposed method. The value n is the total number of parameters of the neural network. The fitness of every chromosome g is an interval f_g = [f_{g,\min}, f_{g,\max}]. Hence, in order to compare two different intervals a = [a_1, a_2] and b = [b_1, b_2], we incorporate the following function:
L^*(a, b) = \begin{cases} \text{TRUE}, & a_1 < b_1 \ \text{or} \ (a_1 = b_1 \ \text{and} \ a_2 < b_2) \\ \text{FALSE}, & \text{otherwise} \end{cases} \quad (11)
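Equation (11) is a lexicographic ordering on intervals; a possible implementation, together with the kind of best-first sort it induces (the helper names are ours):

```python
from functools import cmp_to_key

def interval_less(a, b):
    """L*(a, b) of Equation (11): TRUE when interval a precedes interval b,
    comparing lower endpoints first and breaking ties with upper endpoints."""
    return a[0] < b[0] or (a[0] == b[0] and a[1] < b[1])

def sort_by_fitness(intervals):
    """Sort fitness intervals best-first using L* as the comparator."""
    def cmp(a, b):
        if interval_less(a, b):
            return -1
        if interval_less(b, a):
            return 1
        return 0
    return sorted(intervals, key=cmp_to_key(cmp))
```

This is exactly the comparison used by the selection operator of Section 2.1.3 to rank chromosomes.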
Hence, the steps of the genetic algorithm of the first phase are the following:

2.1.1. Initialization Step

  • Set K as the number of rules.
  • Set S = [-D, D]^n as the initial bounding box for the parameters of the neural network, where D is a positive number with D > 1.
  • Set N_C as the total number of chromosomes.
  • Set N_S as the number of samples used in the fitness evaluation.
  • Set P_s as the selection rate, where P_s \le 1.
  • Set P_m as the mutation rate, where P_m \le 1.
  • Set t = 0 as the current generation number.
  • Set N_t as the maximum number of generations allowed.
  • Initialize randomly the chromosomes C_i, i = 1, \ldots, N_C, as sets of the form of Equation (10).

2.1.2. Termination Check Step

  • Set t = t + 1 .
  • If t \ge N_t, terminate.

2.1.3. Genetic Operations Step

  • For every chromosome C_i, i = 1, \ldots, N_C, calculate the corresponding fitness value f_i using the algorithm in Section 2.2.
  • Apply the selection operator. Initially, the chromosomes are sorted according to their fitness values. The sorting utilizes the function L^*(a, b) of Equation (11) to compare fitness values. The best (1 - P_s) \times N_C chromosomes are copied to the next generation, while the rest are substituted by offspring created through the crossover procedure. The mating parents for the crossover procedure are selected using the well-known technique of tournament selection.
  • Apply the crossover operator: For every pair of selected parents ( z , w ) , two children ( c z , c w ) are produced using the uniform crossover procedure described in Section 2.3.
  • Apply the mutation operator using the algorithm in Section 2.4.
  • Goto Termination Check Step.

2.2. Fitness Evaluation for the Rule Genetic Algorithm

The fitness value for each chromosome g is considered as an interval f = [f_{\min}, f_{\max}], where f_{\min} is an estimate of the lowest value obtained using the rules of the chromosome g and f_{\max} is an estimate of the highest value. In order to calculate the fitness of a rule set g, the following steps are performed:
  • Set f_{\min} = +\infty.
  • Set f_{\max} = -\infty.
  • Apply the rule set g to the original bounding box S. The outcome of this application is the new bounding box S_g.
  • For i = 1, \ldots, N_S do
    Produce a random sample w \in S_g.
    Calculate the training error E_g = E(N(x, w)) using Equation (1).
    If E_g \le f_{\min} then f_{\min} = E_g.
    If E_g \ge f_{\max} then f_{\max} = E_g.
  • EndFor
  • Return the interval f = [f_{\min}, f_{\max}] as the fitness of chromosome g.
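The steps above can be sketched as follows; the `error_fn` argument abstracts the training error of Equation (1), and the function name is ours:

```python
import random

def rule_fitness(rule_sequence, box, error_fn, n_samples):
    """Fitness of a rule chromosome (Section 2.2): shrink the original box
    with the chromosome's rules, draw n_samples random weight vectors inside
    the reduced box S_g, and return the interval (f_min, f_max) of the
    observed training error."""
    # apply each rule set of the chromosome in order
    for rule_set in rule_sequence:
        box = [(a / 2 if l == 1 else a, b / 2 if r == 1 else b)
               for (a, b), (l, r) in zip(box, rule_set)]
    f_min, f_max = float("inf"), float("-inf")
    for _ in range(n_samples):
        w = [random.uniform(a, b) for a, b in box]  # random sample w in S_g
        e = error_fn(w)
        f_min, f_max = min(f_min, e), max(f_max, e)
    return f_min, f_max
```

The returned pair is the interval compared with L* of Equation (11) during selection.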

2.3. Crossover for the Rule Genetic Algorithm

The crossover for the genetic algorithm of the first phase is performed using uniform crossover. For every couple ( z , w ) of selected parents, two children ( c z , c w ) are produced through the following procedure:
  • For i = 1, \ldots, K do
    Let z(i) = (l_z(i), r_z(i)) be the i-th item of the chromosome z.
    Let w(i) = (l_w(i), r_w(i)) be the i-th item of the chromosome w.
    Produce a random number r \in [0, 1].
    If r \le 0.5 then
    • Set c_z(i) = (l_z(i), r_w(i)).
    • Set c_w(i) = (l_w(i), r_z(i)).
    Else
    • Set c_z(i) = (l_w(i), r_z(i)).
    • Set c_w(i) = (l_z(i), r_w(i)).
  • EndFor
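A sketch of this uniform crossover, with a chromosome represented as a list of (l, r) pairs (the function name is ours):

```python
import random

def rule_crossover(z, w):
    """Uniform crossover of Section 2.3: at every position, the two children
    either exchange the parents' r parts or exchange their l parts, each
    choice taken with probability 1/2."""
    cz, cw = [], []
    for (lz, rz), (lw, rw) in zip(z, w):
        if random.random() <= 0.5:
            cz.append((lz, rw))
            cw.append((lw, rz))
        else:
            cz.append((lw, rz))
            cw.append((lz, rw))
    return cz, cw
```

In either branch, the l genes of the two children are a permutation of the parents' l genes at that position, and likewise for the r genes, so no genetic material is lost.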

2.4. Mutation for the Rule Genetic Algorithm

The steps for the mutation procedure for the genetic algorithm of the first phase are the following:
  • For i = 1 , , N C do
    Let C_i = \{ C_i(1), C_i(2), \ldots, C_i(K) \} be the i-th chromosome of the population.
    For j = 1, \ldots, K do
    • Let C_i(j) = (l_i(j), r_i(j)).
    • Produce a random number r \in [0, 1].
    • If r \le P_m then alter randomly, with probability 50%, either the l_i(j) or the r_i(j) part of C_i(j).
  • EndFor
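A sketch of the mutation step; "alter randomly" is interpreted here as flipping the chosen bit, which is one natural reading of the description (the function name is ours):

```python
import random

def rule_mutation(chromosome, p_m):
    """Mutation of Section 2.4: each (l, r) pair is mutated with probability
    p_m by flipping either its l or its r bit, chosen with probability 50%."""
    mutated = []
    for l, r in chromosome:
        if random.random() <= p_m:
            if random.random() <= 0.5:
                l = 1 - l  # flip the left-halving bit
            else:
                r = 1 - r  # flip the right-halving bit
        mutated.append((l, r))
    return mutated
```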

2.5. Second Phase

In the second phase, the best chromosome g_b, defined as
g_b = \{ (l_{b,1}, r_{b,1}), (l_{b,2}, r_{b,2}), \ldots, (l_{b,K}, r_{b,K}) \} \quad (12)
is used to transform the original bounding box S = [-D, D]^n into a new box S_b. The new hyperbox is defined as
S_b = [a_{g,1}, b_{g,1}] \times [a_{g,2}, b_{g,2}] \times \cdots \times [a_{g,n}, b_{g,n}] \quad (13)
This hyperbox will be used to bound the parameters of the neural network. The parameters of the network are trained using a genetic algorithm with the following steps:

2.5.1. Initialization Step

  • Set N C as the total number of chromosomes.
  • Set P_s as the selection rate, where P_s \le 1.
  • Set P_m as the mutation rate, where P_m \le 1.
  • Set t = 0 as the current generation number.
  • Set N t as the maximum number of generations allowed.
  • Initialize randomly the chromosomes C i , i = 1 , , N C , inside the bounding box S b .

2.5.2. Termination Check Step

  • Set t = t + 1 .
  • If t \ge N_t, goto the Local Search Step.

2.5.3. Genetic Operations Step

  • Calculate the fitness value of every chromosome.
    For i = 1, \ldots, N_C do
    • Set f i = E ( N ( x , C i ) ) using Equation (1).
  • Apply the crossover operator. In this phase, the best (1 - P_s) \times N_C chromosomes are transferred intact to the next generation. The rest of the chromosomes are substituted by offspring created through crossover. The selection of two parents x = (x_1, x_2, \ldots, x_n) and y = (y_1, y_2, \ldots, y_n) for crossover is performed using tournament selection. Having selected the parents, the offspring \tilde{x} and \tilde{y} are formed using the following:
    \tilde{x}_i = r_i x_i + (1 - r_i) y_i, \quad \tilde{y}_i = r_i y_i + (1 - r_i) x_i \quad (14)
    where r_i are random numbers in [-0.5, 1.5] [43].
  • Apply the mutation operator. The mutation scheme is the same as in the work of Kaelo and Ali [50]:
    For i = 1 N C do
    • For j = 1 n do
      • Let r [ 0 , 1 ] be a random number.
      • If r \le P_m, alter the element C_{ij} using the following:
        C_{ij} = \begin{cases} C_{ij} + \Delta(t, b_{g,j} - C_{ij}), & \tau = 0 \\ C_{ij} - \Delta(t, C_{ij} - a_{g,j}), & \tau = 1 \end{cases} \quad (15)
        where \tau is a random bit that takes either the value 0 or 1, t is the current generation number and \Delta(t, y) is calculated as:
        \Delta(t, y) = y \left( 1 - r^{\left( 1 - \frac{t}{N_t} \right)^{z}} \right) \quad (16)
        where r \in [0, 1] is a random number and z is a user-defined parameter.
    • EndFor
  • Goto Termination check step.
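The phase-two crossover of Equation (14) and the Kaelo–Ali-style mutation can be sketched as follows. This is an illustration under the reading that a random bit selects the mutation direction while t is the current generation; the function names are ours:

```python
import random

def arithmetic_crossover(x, y):
    """Crossover of Equation (14): component-wise blend of the two parents
    with r_i drawn uniformly from [-0.5, 1.5]."""
    xt, yt = [], []
    for xi, yi in zip(x, y):
        ri = random.uniform(-0.5, 1.5)
        xt.append(ri * xi + (1 - ri) * yi)
        yt.append(ri * yi + (1 - ri) * xi)
    return xt, yt

def delta(t, y, n_gen, z):
    """Delta(t, y) = y * (1 - r**((1 - t/n_gen)**z)): a step in [0, y] that
    shrinks to zero as the generation t approaches the maximum n_gen."""
    r = random.random()
    return y * (1.0 - r ** ((1.0 - t / n_gen) ** z))

def nonuniform_mutation(c, bounds, t, n_gen, z, p_m):
    """With probability p_m, each component moves toward its upper or lower
    bound (chosen by a random bit), never leaving [a_j, b_j]."""
    out = []
    for cj, (aj, bj) in zip(c, bounds):
        if random.random() <= p_m:
            if random.randint(0, 1) == 0:
                cj = cj + delta(t, bj - cj, n_gen, z)
            else:
                cj = cj - delta(t, cj - aj, n_gen, z)
        out.append(cj)
    return out
```

Note that Equation (14) preserves the component-wise sum of the parents, and Equation (15) keeps every offspring inside the hyperbox S_b produced by the first phase.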

2.5.4. Local Search Step

  • Set C * as the best chromosome of the population.
  • Apply a local search procedure, C^* = L(C^*). The local search procedure used here is the BFGS variant due to Powell [51].

3. Experiments

The proposed method was evaluated on a series of classification and regression problems from the relevant literature. The classification problems used for the experiments were found in most cases in two public internet databases.
The regression datasets were in most cases available from the StatLib URL (accessed on 23 May 2022). The proposed method was compared against a neural network trained by a genetic algorithm, and the results are reported.

3.1. Experimental Datasets

The following classification datasets were used:
  • Appendicitis, a medical dataset, proposed in [53].
  • Australian dataset [54], which is related to credit card applications.
  • Balance dataset [55], which is used to predict psychological states.
  • Cleveland dataset, a dataset used to detect heart disease that has appeared in various papers [56,57].
  • Bands dataset, a printing problem used to identify cylinder bands.
  • Dermatology dataset [58], which is used for the differential diagnosis of erythemato-squamous diseases.
  • Hayes Roth dataset. This dataset [59] contains 5 numeric-valued attributes and 132 patterns.
  • Heart dataset [60], used to detect heart disease.
  • HouseVotes dataset [61], which is about votes for U.S. House of Representatives Congressmen.
  • Ionosphere dataset. The ionosphere dataset contains data from the Johns Hopkins Ionosphere database and it has been studied in several papers [62,63].
  • Liverdisorder dataset [64], used for detecting liver disorders in people using blood analysis.
  • Mammographic dataset [65]. This dataset can be used to identify the severity (benign or malignant) of a mammographic mass lesion from BI-RADS attributes and the patient’s age. It contains 830 patterns of 5 features each.
  • PageBlocks dataset [66], used to detect the page layout of a document.
  • Parkinsons dataset. This dataset is composed of a range of biomedical voice measurements from 31 people, 23 with Parkinson’s disease (PD) [67].
  • Pima dataset [68], used to detect the presence of diabetes.
  • Popfailures dataset [69], which is related to climate model simulation crashes.
  • Regions2 dataset. It was created from liver biopsy images of patients with hepatitis C [70]. From each region in the acquired images, 18 shape-based and color-based features were extracted, and each region was also annotated by medical experts. The resulting dataset includes 600 samples belonging to 6 classes.
  • Saheart dataset [71], used to detect heart disease.
  • Segment dataset [72]. This database contains patterns from a database of 7 outdoor images (classes).
  • Wdbc dataset [73], which contains data for breast tumors.
  • Wine dataset, used to determine the origin of wines through chemical analysis; it has been used in various research papers [74,75].
  • Eeg datasets. As a real-world example, an EEG dataset described in [9] is used here. The dataset consists of five sets (denoted as Z, O, N, F and S), each containing 100 single-channel EEG segments with a duration of 23.6 s. With different combinations of these sets, the produced datasets are Z_F_S, ZO_NF_S and ZONF_S.
  • ZOO dataset [76], where the task is to classify animals in seven predefined classes.
In addition, the following regression datasets were used:
  • ABALONE dataset [77]. This dataset can be used to obtain a model to predict the age of abalone from physical measurements.
  • AIRFOIL dataset, which is used by NASA for a series of aerodynamic and acoustic tests [78].
  • BASEBALL dataset, a dataset to predict the salary of baseball players.
  • BK dataset. This dataset comes from smoothing methods in statistics [79] and is used to estimate the points scored per minute in a basketball game.
  • BL dataset: This dataset can be downloaded from StatLib. It contains data from an experiment on the effects of machine adjustments on the time to count bolts.
  • CONCRETE dataset. This dataset is taken from civil engineering [80].
  • DEE dataset, used to predict the daily average price of electricity energy in Spain.
  • DIABETES dataset, a medical dataset.
  • HOUSING dataset. This dataset was taken from the StatLib library which is maintained at Carnegie Mellon University and it is described in [81].
  • FA dataset, which contains percentage of body fat and ten body circumference measurements. The goal is to fit body fat to the other measurements.
  • MB dataset. This dataset is available from smoothing methods in statistics [79] and it includes 61 patterns.
  • MORTGAGE dataset, which contains the economic data information of the U.S.
  • PY dataset (pyrimidines problem). The source of this dataset is the URL (accessed on 23 May 2022) and it is a problem of 27 attributes and 74 patterns. The task consists of learning quantitative structure activity relationships (QSARs) and is provided by [82].
  • QUAKE dataset. The objective here is to approximate the strength of an earthquake.
  • TREASURY dataset, which contains economic data information of the U.S. from 1 April 1980 to 2 April 2000 on a weekly basis.
  • WANKARA dataset, which contains weather information.

3.2. Experimental Results

The method was compared against four other methods:
  • A genetic algorithm with the same parameters that are shown in Table 1. In addition, after the termination of the genetic algorithm, the local search procedure of BFGS was applied to the best chromosome of the population, in order to enhance the quality of the solution. The column GENETIC in the experimental tables denotes the results from the application of this method.
  • The Adam stochastic optimization method [83] as implemented in OptimLib, freely available from (accessed on 23 May 2022). The results for this method are listed in the column ADAM in the relevant tables.
  • The RPROP method [21] as implemented in the FCNN software package [84]. The results for this method are listed in the column RPROP in the relevant tables.
  • The NEAT method (neuroevolution of augmenting topologies) [85] as implemented in the EvolutionNet package which is freely available from (accessed on 23 May 2022). The maximum number of generations was the same as in the case of the genetic algorithm.
All the experiments were conducted 30 times with different seeds for the random number generator each time, and averages were taken. To perform the experiments, the software IntervalGenetic, freely available from (accessed on 23 May 2022), was utilized. The experimental results for the classification datasets are shown in Table 2 and the results for the regression datasets are outlined in Table 3. For the classification problems, the average classification error on the test set is shown, and for the regression datasets, the average mean squared error on the test set is displayed. In all cases, 10-fold cross validation was used and the number of hidden nodes (parameter H) was set to 10. The column DATASET stands for the name of the dataset incorporated, the column D = 50 represents the application of the proposed method with D = 50 as the initial value for the interval of weights, the column D = 100 stands for the results of the proposed method with D = 100 and, finally, the column D = 200 represents the results of the proposed method with D = 200. In both tables, an additional row denoted AVERAGE was added at the end, showing the average classification or regression error over all datasets. All the experiments were conducted on an AMD Ryzen 5950X equipped with 128 GB of RAM. The operating system used was OpenSUSE Linux and all the programs were compiled using the GNU C++ compiler.
As can be seen from the experimental results, the proposed method is significantly superior to the other methods, especially in the case of regression data. The RPROP training method seems to overcome ADAM in most cases of classification datasets and the simple genetic method is better than ADAM and RPROP for classification datasets but not for regression datasets. In addition, the change in the parameter D does not seem to have a significant effect on the performance of the algorithm and the proposed algorithm achieves high performance even for small values of this parameter.
In addition, the average execution times over all the problems of this publication were compared between the proposed method and the methods ADAM, RPROP, GENETIC and NEAT mentioned above. The average execution times are presented graphically in Figure 1. In order to speed up the proposed method, the genetic algorithm used was parallelized using the open source library OpenMP [49]. The column THREAD1 stands for the average execution time of the proposed method with one thread, the column THREADS 2 represents the average execution time using two threads in the OpenMP implementation, the column THREADS 4 denotes the average execution time for four threads and, finally, the column THREADS 8 denotes the average execution time for eight threads. The proposed method has slow execution times when performed on one thread, but as the number of threads increases, the execution time decreases dramatically. This is very important, because it means that the method could be applied to large problems if the computer in use has enough execution threads. Obviously, all methods of training artificial neural networks could be parallelized in one way or another; the parallelization of the proposed method was performed because it is by nature an extremely slow method, since it requires the use of two genetic algorithms in series. By using parallel techniques this problem is alleviated, although the computational cost remains high; this is the only substantial price for using the technique. In addition, a time comparison was made for the PageBlocks dataset between the proposed method and a parallel implementation of the Adam algorithm named DADAM, for a number of threads ranging from 1 to 8. The time comparison is graphically illustrated in Figure 2.
To make the dynamics of the proposed method clearer, another series of experiments was performed. In these, the maximum number of generations (parameter N t ) received three values: 20, 40 and 100. For each value, all experiments for the classification and regression datasets were performed. The results for the classification datasets are listed in Table 4 and the results for the regression datasets are shown in Table 5. As expected, the proposed method improves its performance as the maximum number of generations increases, but even for a small number of generations it has a satisfactory performance.
In addition, to make a better and fairer comparison of the results, another set of experiments was performed with the genetic algorithm, in which the maximum number of generations was varied from 100 to 800, and the results are presented in Table 6 for the classification datasets and in Table 7 for the regression datasets. Observing these results, we can say that after 200 generations there is no significant difference in the efficiency of the genetic algorithm.

4. Conclusions

An innovative method of training artificial neural networks was presented in this paper. The method consists of two important phases: in the first phase, through a hybrid genetic algorithm, an attempt is made to identify the optimal interval for the initialization and training of the network parameters, and in the second phase, the training of the parameters within the optimal interval of the first phase is performed using a genetic algorithm. The location of the optimal interval in the first phase is conducted using partition rules for the initial interval, which are applied in order. This technique aims to reduce the parameter search space and thus significantly speed up the training of the network.
The proposed method was tested on a series of classification and regression datasets from the relevant literature and the experimental results seem to be very promising compared to the genetic algorithm procedure. However, since the method consists of two computational phases, it is much slower than other training techniques for artificial neural networks, and therefore, the use of parallel processing techniques is considered necessary.
Future improvements to the proposed method may include the incorporation of additional global optimization techniques instead of genetic algorithms, the usage of more advanced stopping rules and the application of the method to other types of neural networks, such as radial basis function (RBF) networks.

Author Contributions

I.G.T., A.T. and E.K. conceived the idea and methodology and supervised the technical part regarding the software. I.G.T. conducted the experiments, employing several datasets, and provided the comparative experiments. A.T. performed the statistical analysis. E.K. and all other authors prepared the manuscript. E.K. and I.G.T. organized the research team and A.T. supervised the project. All authors have read and agreed to the published version of the manuscript.


Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.


Acknowledgments

The experiments of this research work were performed using the high-performance computing system established at the Knowledge and Intelligent Computing Laboratory, Department of Informatics and Telecommunications, University of Ioannina, acquired with the project “Educational Laboratory equipment of TEI of Epirus” with MIS 5007094, funded by the Operational Programme “Epirus” 2014–2020, by the ERDF and national funds.

Conflicts of Interest

The authors declare no conflict of interest.


References

  1. Bishop, C. Neural Networks for Pattern Recognition; Oxford University Press: Oxford, UK, 1995. [Google Scholar]
  2. Cybenko, G. Approximation by superpositions of a sigmoidal function. Math. Control Signals Syst. 1989, 2, 303–314. [Google Scholar] [CrossRef]
  3. Baldi, P.; Cranmer, K.; Faucett, T.; Sadowski, P.; Whiteson, D. Parameterized neural networks for high-energy physics. Eur. Phys. J. C 2016, 76, 235. [Google Scholar] [CrossRef]
  4. Valdas, J.J.; Bonham-Carter, G. Time dependent neural network models for detecting changes of state in complex processes: Applications in earth sciences and astronomy. Neural Netw. 2006, 19, 196–207. [Google Scholar] [CrossRef]
  5. Carleo, G.; Troyer, M. Solving the quantum many-body problem with artificial neural networks. Science 2017, 355, 602–606. [Google Scholar] [CrossRef]
  6. Shirvany, Y.; Hayati, M.; Moradian, R. Multilayer perceptron neural networks with novel unsupervised training method for numerical solution of the partial differential equations. Appl. Soft Comput. 2009, 9, 20–29. [Google Scholar] [CrossRef]
  7. Malek, A.; Beidokhti, R.S. Numerical solution for high order differential equations using a hybrid neural network—Optimization method. Appl. Math. Comput. 2006, 183, 260–271. [Google Scholar] [CrossRef]
  8. Topuz, A. Predicting moisture content of agricultural products using artificial neural networks. Adv. Eng. 2010, 41, 464–470. [Google Scholar] [CrossRef]
  9. Escamilla-García, A.; Soto-Zarazúa, G.M.; Toledano-Ayala, M.; Rivas-Araiza, E.; Gastélum-Barrios, A. Applications of Artificial Neural Networks in Greenhouse Technology and Overview for Smart Agriculture Development. Appl. Sci. 2020, 10, 3835. [Google Scholar] [CrossRef]
  10. Shen, L.; Wu, J.; Yang, W. Multiscale Quantum Mechanics/Molecular Mechanics Simulations with Neural Networks. J. Chem. Theory Comput. 2016, 12, 4934–4946. [Google Scholar] [CrossRef]
  11. Manzhos, S.; Dawes, R.; Carrington, T. Neural network-based approaches for building high dimensional and quantum dynamics-friendly potential energy surfaces. Int. J. Quantum Chem. 2015, 115, 1012–1020. [Google Scholar] [CrossRef]
  12. Wei, J.N.; Duvenaud, D.; Aspuru-Guzik, A. Neural Networks for the Prediction of Organic Chemistry Reactions. ACS Cent. Sci. 2016, 2, 725–732. [Google Scholar] [CrossRef]
  13. Falat, L.; Pancikova, L. Quantitative Modelling in Economics with Advanced Artificial Neural Networks. Procedia Econ. Financ. 2015, 34, 194–201. [Google Scholar] [CrossRef]
  14. Namazi, M.; Shokrolahi, A.; Maharluie, M.S. Detecting and ranking cash flow risk factors via artificial neural networks technique. J. Bus. Res. 2016, 69, 1801–1806. [Google Scholar] [CrossRef]
  15. Tkacz, G. Neural network forecasting of Canadian GDP growth. Int. J. Forecast. 2001, 17, 57–69. [Google Scholar] [CrossRef]
  16. Baskin, I.I.; Winkler, D.; Tetko, I.V. A renaissance of neural networks in drug discovery. Expert Opin. Drug Discov. 2016, 11, 785–795. [Google Scholar] [CrossRef]
  17. Bartzatt, R. Prediction of Novel Anti-Ebola Virus Compounds Utilizing Artificial Neural Network (ANN). Chem. Fac. 2018, 49, 16–34. [Google Scholar]
  18. Tsoulos, I.G.; Gavrilis, D.; Glavas, E. Neural network construction and training using grammatical evolution. Neurocomputing 2008, 72, 269–277. [Google Scholar] [CrossRef]
  19. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536. [Google Scholar] [CrossRef]
  20. Chen, T.; Zhong, S. Privacy-Preserving Backpropagation Neural Network Learning. IEEE Trans. Neural Netw. 2009, 20, 1554–1564. [Google Scholar] [CrossRef]
  21. Riedmiller, M.; Braun, H. A Direct Adaptive Method for Faster Backpropagation Learning: The RPROP algorithm. In Proceedings of the IEEE International Conference on Neural Networks, San Francisco, CA, USA, 28 March–1 April 1993; pp. 586–591. [Google Scholar]
  22. Pajchrowski, T.; Zawirski, K.; Nowopolski, K. Neural Speed Controller Trained Online by Means of Modified RPROP Algorithm. IEEE Trans. Ind. Inform. 2015, 11, 560–568. [Google Scholar] [CrossRef]
  23. Hermanto, R.P.; Nugroho, A. Waiting-Time Estimation in Bank Customer Queues using RPROP Neural Networks. Procedia Comput. Sci. 2018, 135, 35–42. [Google Scholar] [CrossRef]
  24. Robitaille, B.; Marcos, B.; Veillette, M.; Payre, G. Modified quasi-Newton methods for training neural networks. Comput. Chem. Eng. 1996, 20, 1133–1140. [Google Scholar] [CrossRef]
  25. Liu, Q.; Liu, J.; Sang, R.; Li, J.; Zhang, T.; Zhang, Q. Fast Neural Network Training on FPGA Using Quasi-Newton Optimization Method. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 2018, 26, 1575–1579. [Google Scholar] [CrossRef]
  26. Yamazaki, A.; de Souto, M.C.P.; Ludermir, T.B. Optimization of neural network weights and architectures for odor recognition using simulated annealing. In Proceedings of the 2002 International Joint Conference on Neural Networks (IJCNN’02), Honolulu, HI, USA, 12–17 May 2002; Volume 1, pp. 547–552. [Google Scholar]
  27. Da, Y.; Xiurun, G. An improved PSO-based ANN with simulated annealing technique. Neurocomputing 2005, 63, 527–533. [Google Scholar] [CrossRef]
  28. Leung, F.H.F.; Lam, H.K.; Ling, S.H.; Tam, P.K. Tuning of the structure and parameters of a neural network using an improved genetic algorithm. IEEE Trans. Neural Netw. 2003, 14, 79–88. [Google Scholar] [CrossRef]
  29. Yao, X. Evolving artificial neural networks. Proc. IEEE 1999, 87, 1423–1447. [Google Scholar]
  30. Zhang, C.; Shao, H.; Li, Y. Particle swarm optimisation for evolving artificial neural network. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, Nashville, TN, USA, 8–11 October 2000; pp. 2487–2490. [Google Scholar]
  31. Yu, J.; Wang, S.; Xi, L. Evolving artificial neural networks using an improved PSO and DPSO. Neurocomputing 2008, 71, 1054–1060. [Google Scholar] [CrossRef]
  32. Ivanova, I.; Kubat, M. Initialization of neural networks by means of decision trees. Knowl.-Based Syst. 1995, 8, 333–344. [Google Scholar] [CrossRef]
  33. Yam, J.Y.F.; Chow, T.W.S. A weight initialization method for improving training speed in feedforward neural network. Neurocomputing 2000, 30, 219–232. [Google Scholar] [CrossRef]
  34. Chumachenko, K.; Iosifidis, A.; Gabbouj, M. Feedforward neural networks initialization based on discriminant learning. Neural Netw. 2022, 146, 220–229. [Google Scholar] [CrossRef]
35. Shahjahan, M.D.; Kazuyuki, M. Neural network training algorithm with positive correlation. IEICE Trans. Inf. Syst. 2005, 88, 2399–2409. [Google Scholar] [CrossRef]
  36. Treadgold, N.K.; Gedeon, T.D. Simulated annealing and weight decay in adaptive learning: The SARPROP algorithm. IEEE Trans. Neural Netw. 1998, 9, 662–668. [Google Scholar] [CrossRef] [PubMed]
  37. Leung, C.S.; Wong, K.W.; Sum, P.F.; Chan, L.W. A pruning method for the recursive least squared algorithm. Neural Netw. 2001, 14, 147–174. [Google Scholar] [CrossRef]
38. Ilonen, J.; Kamarainen, J.K.; Lampinen, J. Differential Evolution Training Algorithm for Feed-Forward Neural Networks. Neural Process. Lett. 2003, 17, 93–105. [Google Scholar]
  39. Baioletti, M.; Bari, G.D.; Milani, A.; Poggioni, V. Differential Evolution for Neural Networks Optimization. Mathematics 2020, 8, 69. [Google Scholar] [CrossRef]
  40. Salama, K.M.; Abdelbar, A.M. Learning neural network structures with ant colony algorithms. Swarm Intell. 2015, 9, 229–265. [Google Scholar] [CrossRef]
  41. Tsoulos, I.G.; Gavrilis, D.; Glavas, E. Solving differential equations with constructed neural networks. Neurocomputing 2009, 72, 2385–2391. [Google Scholar] [CrossRef]
  42. Martínez-Zarzuela, M.; Díaz Pernas, F.J.; Díez Higuera, J.F.; Rodríguez, M.A. Fuzzy ART Neural Network Parallel Computing on the GPU. In Computational and Ambient Intelligence; Sandoval, F., Prieto, A., Cabestany, J., Graña, M., Eds.; IWANN 2007; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2007; Volume 4507. [Google Scholar]
  43. Huqqani, A.A.; Schikuta, E.; Chen, S.Y.P. Multicore and GPU Parallelization of Neural Networks for Face Recognition. Procedia Comput. Sci. 2013, 18, 349–358. [Google Scholar] [CrossRef]
  44. Hansen, E.; Walster, G.W. Global Optimization Using Interval Analysis; Marcel Dekker Inc.: New York, NY, USA, 2004. [Google Scholar]
45. Markót, M.C.; Fernández, J.; Casado, L.G.; Csendes, T. New interval methods for constrained global optimization. Math. Program. 2006, 106, 287–318. [Google Scholar] [CrossRef]
  46. Žilinskas, A.; Žilinskas, J. Interval Arithmetic Based Optimization in Nonlinear Regression. Informatica 2010, 21, 149–158. [Google Scholar] [CrossRef]
  47. Rodriguez, P.; Wiles, J.; Elman, J.L. A Recurrent Neural Network that Learns to Count. Connect. Sci. 1999, 11, 5–40. [Google Scholar] [CrossRef]
  48. Chandra, R.; Zhang, M. Cooperative coevolution of Elman recurrent neural networks for chaotic time series prediction. Neurocomputing 2012, 86, 116–123. [Google Scholar] [CrossRef]
  49. Dagum, L.; Menon, R. OpenMP: An industry standard API for shared-memory programming. IEEE Comput. Sci. Eng. 1998, 5, 46–55. [Google Scholar] [CrossRef]
  50. Kaelo, P.; Ali, M.M. Integrated crossover rules in real coded genetic algorithms. Eur. J. Oper. Res. 2007, 176, 60–76. [Google Scholar] [CrossRef]
  51. Powell, M.J.D. A Tolerant Algorithm for Linearly Constrained Optimization Calculations. Math. Program. 1989, 45, 547–566. [Google Scholar] [CrossRef]
52. Alcalá-Fdez, J.; Fernández, A.; Luengo, J.; Derrac, J.; García, S.; Sánchez, L.; Herrera, F. KEEL Data-Mining Software Tool: Data Set Repository, Integration of Algorithms and Experimental Analysis Framework. J. Mult.-Valued Log. Soft Comput. 2011, 17, 255–287. [Google Scholar]
  53. Weiss, S.M.; Kulikowski, C.A. Computer Systems That Learn: Classification and Prediction Methods from Statistics, Neural Nets, Machine Learning, and Expert Systems; Morgan Kaufmann Publishers Inc.: Burlington, MA, USA, 1991. [Google Scholar]
54. Quinlan, J.R. Simplifying Decision Trees. Int. J. Man-Mach. Stud. 1987, 27, 221–234. [Google Scholar] [CrossRef]
  55. Shultz, T.; Mareschal, D.; Schmidt, W. Modeling Cognitive Development on Balance Scale Phenomena. Mach. Learn. 1994, 16, 59–88. [Google Scholar] [CrossRef]
  56. Zhou, Z.H.; Jiang, Y. NeC4.5: Neural ensemble based C4.5. IEEE Trans. Knowl. Data Eng. 2004, 16, 770–773. [Google Scholar] [CrossRef]
  57. Setiono, R.; Leow, W.K. FERNN: An Algorithm for Fast Extraction of Rules from Neural Networks. Appl. Intell. 2000, 12, 15–25. [Google Scholar] [CrossRef]
58. Demiroz, G.; Govenir, H.A.; Ilter, N. Learning Differential Diagnosis of Erythemato-Squamous Diseases using Voting Feature Intervals. Artif. Intell. Med. 1998, 13, 147–165. [Google Scholar]
  59. Hayes-Roth, B.; Hayes-Roth, B.F. Concept learning and the recognition and classification of exemplars. J. Verbal Learning Verbal Behav. 1977, 16, 321–338. [Google Scholar] [CrossRef]
  60. Kononenko, I.; Šimec, E.; Robnik-Šikonja, M. Overcoming the Myopia of Inductive Learning Algorithms with RELIEFF. Appl. Intell. 1997, 7, 39–55. [Google Scholar] [CrossRef]
  61. French, R.M.; Chater, N. Using noise to compute error surfaces in connectionist networks: A novel means of reducing catastrophic forgetting. Neural Comput. 2002, 14, 1755–1769. [Google Scholar] [CrossRef] [PubMed]
  62. Dy, J.G.; Brodley, C.E. Feature Selection for Unsupervised Learning. J. Mach. Learn. Res. 2004, 5, 845–889. [Google Scholar]
  63. Perantonis, S.J.; Virvilis, V. Input Feature Extraction for Multilayered Perceptrons Using Supervised Principal Component Analysis. Neural Process. Lett. 1999, 10, 243–252. [Google Scholar] [CrossRef]
  64. Garcke, J.; Griebel, M. Classification with sparse grids using simplicial basis functions. Intell. Data Anal. 2002, 6, 483–502. [Google Scholar] [CrossRef]
  65. Elter, M.; Schulz-Wendtland, R.; Wittenberg, T. The prediction of breast cancer biopsy outcomes using two CAD approaches that both emphasize an intelligible decision process. Med. Phys. 2007, 34, 4164–4172. [Google Scholar] [CrossRef]
66. Esposito, F.; Malerba, D.; Semeraro, G. Multistrategy Learning for Document Recognition. Appl. Artif. Intell. 1994, 8, 33–84. [Google Scholar]
  67. Little, M.A.; McSharry, P.E.; Hunter, E.J.; Spielman, J.; Ramig, L.O. Suitability of dysphonia measurements for telemonitoring of Parkinson’s disease. IEEE Trans. Biomed. Eng. 2009, 56, 1015–1022. [Google Scholar] [CrossRef]
  68. Smith, J.W.; Everhart, J.E.; Dickson, W.C.; Knowler, W.C.; Johannes, R.S. Using the ADAP learning algorithm to forecast the onset of diabetes mellitus. In Proceedings of the Symposium on Computer Applications and Medical Care, Minneapolis, MN, USA, 8–10 June 1988; pp. 261–265. [Google Scholar]
  69. Lucas, D.D.; Klein, R.; Tannahill, J.; Ivanova, D.; Brandon, S.; Domyancic, D.; Zhang, Y. Failure analysis of parameter-induced simulation crashes in climate models. Geosci. Model Dev. 2013, 6, 1157–1171. [Google Scholar] [CrossRef]
  70. Giannakeas, N.; Tsipouras, M.G.; Tzallas, A.T.; Kyriakidi, K.; Tsianou, Z.E.; Manousou, P.; Hall, A.; Karvounis, E.C.; Tsianos, V.; Tsianos, E. A clustering based method for collagen proportional area extraction in liver biopsy images. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS, Milan, Italy, 25–29 August 2015; pp. 3097–3100. [Google Scholar]
  71. Hastie, T.; Tibshirani, R. Non-parametric logistic and proportional odds regression. JRSS-C Appl. Stat. 1987, 36, 260–276. [Google Scholar] [CrossRef]
  72. Dash, M.; Liu, H.; Scheuermann, P.; Tan, K.L. Fast hierarchical clustering and its validation. Data Knowl. Eng. 2003, 44, 109–138. [Google Scholar] [CrossRef]
  73. Wolberg, W.H.; Mangasarian, O.L. Multisurface method of pattern separation for medical diagnosis applied to breast cytology. Proc. Natl. Acad. Sci. USA 1990, 87, 9193–9196. [Google Scholar] [CrossRef]
  74. Raymer, M.; Doom, T.E.; Kuhn, L.A.; Punch, W.F. Knowledge discovery in medical and biological datasets using a hybrid Bayes classifier/evolutionary algorithm. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2003, 33, 802–813. [Google Scholar] [CrossRef]
  75. Zhong, P.; Fukushima, M. Regularized nonsmooth Newton method for multi-class support vector machines. Optim. Methods Softw. 2007, 22, 225–236. [Google Scholar] [CrossRef]
  76. Koivisto, M.; Sood, K. Exact Bayesian Structure Discovery in Bayesian Networks. J. Mach. Learn. Res. 2004, 5, 549–573. [Google Scholar]
77. Nash, W.J.; Sellers, T.L.; Talbot, S.R.; Cawthorn, A.J.; Ford, W.B. The Population Biology of Abalone (Haliotis Species) in Tasmania. I. Blacklip Abalone (H. rubra) from the North Coast and Islands of Bass Strait; Report No. 48; Sea Fisheries Division, Department of Primary Industry and Fisheries: Taroona, Australia, 1994. [Google Scholar]
78. Brooks, T.F.; Pope, D.S.; Marcolini, M.A. Airfoil Self-Noise and Prediction; Technical Report, NASA RP-1218; National Aeronautics and Space Administration: Washington, DC, USA, 1989.
79. Simonoff, J.S. Smoothing Methods in Statistics; Springer: Berlin/Heidelberg, Germany, 1996. [Google Scholar]
  80. Yeh, I.C. Modeling of strength of high performance concrete using artificial neural networks. Cem. Concr. Res. 1998, 28, 1797–1808. [Google Scholar] [CrossRef]
81. Harrison, D.; Rubinfeld, D.L. Hedonic prices and the demand for clean air. J. Environ. Econ. Manag. 1978, 5, 81–102. [Google Scholar] [CrossRef]
  82. King, R.D.; Muggleton, S.; Lewis, R.; Sternberg, M.J.E. Drug design by machine learning: The use of inductive logic programming to model the structure-activity relationships of trimethoprim analogues binding to dihydrofolate reductase. Proc. Nat. Acad. Sci. USA 1992, 89, 11322–11326. [Google Scholar] [CrossRef]
  83. Kingma, D.P.; Ba, J.L. ADAM: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015), San Diego, CA, USA, 7–9 May 2015; pp. 1–15. [Google Scholar]
  84. Klima, G. Fast Compressed Neural Networks. Available online: (accessed on 23 May 2022).
  85. Stanley, K.O.; Miikkulainen, R. Evolving Neural Networks through Augmenting Topologies. Evol. Comput. 2002, 10, 99–127. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Execution time comparison between the proposed algorithm and the other mentioned methods.
Figure 2. Time comparison between the proposed method and a parallel implementation of the Adam algorithm. The comparison is made for the PageBlocks dataset.
Table 1. Experimental parameters.
N_C = 200
N_S = 50
N_t = 200
P_s = 0.10
P_m = 0.01
Table 2. Experiments for classification datasets.
Hayes Roth 56.18% 59.70% 37.46% 50.15% 28.72% 28.84% 32.05%
Table 3. Experiments for regression datasets.
Table 4. Experiments with N_t for the classification datasets.
DATASET N_t = 20 N_t = 40 N_t = 100
Hayes Roth 50.33% 38.56% 36.80%
Table 5. Experiments with different values of the N_t parameter for the regression datasets.
DATASET N_t = 20 N_t = 40 N_t = 100
Table 6. Experiments with the genetic method and various values of N_t for the classification datasets.
DATASET N_t = 100 N_t = 200 N_t = 400 N_t = 800
Hayes Roth 58.44% 56.18% 57.21% 55.51%
Table 7. Experiments with the genetic method and various values of N_t for the regression datasets.
DATASET N_t = 100 N_t = 200 N_t = 400 N_t = 800
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Tsoulos, I.G.; Tzallas, A.; Karvounis, E. A Rule-Based Method to Locate the Bounds of Neural Networks. Knowledge 2022, 2, 412-428.
