Article

Testing of a Virtualized Distributed Processing System for the Execution of Bio-Inspired Optimization Algorithms

1
Facultad de Ingeniería, Universidad Distrital Francisco José de Caldas, Bogota 11021-110231588, Colombia
2
Facultad de Ingeniería, Universidad Escuela Colombiana de Carreras Industriales, Bogota 111311, Colombia
*
Author to whom correspondence should be addressed.
Symmetry 2020, 12(7), 1192; https://doi.org/10.3390/sym12071192
Submission received: 26 May 2020 / Revised: 7 July 2020 / Accepted: 13 July 2020 / Published: 17 July 2020

Abstract

Due to the stochastic characteristics of bio-inspired optimization algorithms, several executions are often required; therefore, a suitable infrastructure must be available to run these algorithms. This paper reviews a virtualized distributed processing scheme to establish an adequate infrastructure for the execution of bio-inspired algorithms. In order to test the virtualized distributed system, well-known versions of genetic algorithms, differential evolution, and particle swarm optimization are used. The results show that the reviewed virtualized distributed scheme speeds up the execution of the algorithms without altering their results in the objective function.

1. Introduction

Artificial intelligence and processing technologies are important tools to improve the analysis of different phenomena in nature; it is important to have an efficient infrastructure for data processing and analysis, which must be platform-independent [1]. Regarding one of the techniques used in artificial intelligence, bio-inspired optimization algorithms have proven to be a suitable tool for solving engineering problems; however, because of their stochastic behavior, they often require a large number of iterations as well as a number of executions to obtain a useful solution. Therefore, it is necessary to establish an adequate infrastructure to run such algorithms, a virtualized distributed system being a suitable option.

1.1. Bio-Inspired Optimization

Bio-inspired optimization techniques (heuristics) are a suitable alternative when traditional methods cannot determine appropriate results or have limitations. In the field of bio-inspired optimization, there are different proposals based on behaviors and phenomena that exist in nature [2]. In this regard, among the algorithms based on a single individual are stochastic hill climbing (SHC) and simulated annealing (SA). Among the methods based on several individuals (populations) are genetic algorithms (GA), differential evolution (DE), ant colony optimization (ACO), bacterial chemotaxis (BCO), and particle swarm optimization (PSO). The stochastic hill climbing algorithm is based on the stochastic selection of neighboring solutions, which are accepted in cases wherein there is an improvement in the target function [3]. Simulated annealing is a method that emulates the crystalline formation of a material by heating and cooling it, seeking to move from a higher to a lower energy state [3]. Genetic algorithms and differential evolution seek to emulate the process by which nature improves a species over time [3]. On the other hand, algorithms based on ant colonies, bacterial chemotaxis, and swarms of particles are inspired by the behavior of living beings searching for food [4,5,6].
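As an illustration of the single-individual approach, the stochastic hill climbing loop just described can be sketched in a few lines. The sketch below is in Python (the experiments in this work used Octave), and the test function, step size, and iteration count are illustrative assumptions rather than values taken from the paper:

```python
import random

def stochastic_hill_climbing(f, x0, step=0.1, iters=1000, seed=1):
    """Minimize f by repeatedly accepting random neighbors that improve it."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        # Propose a stochastic neighbor of the current solution.
        neighbor = [xi + rng.uniform(-step, step) for xi in x]
        fn = f(neighbor)
        if fn < fx:  # accept only when the target function improves
            x, fx = neighbor, fn
    return x, fx

# Illustration: minimize the sphere function (global minimum 0 at the origin).
sphere = lambda v: sum(c * c for c in v)
best, best_val = stochastic_hill_climbing(sphere, [2.0, -1.5])
```

Because only improving neighbors are accepted, the returned value can never be worse than the starting point, which is the defining property of the hill climbing family.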
In order to test the processing system, we used the standard versions of genetic algorithms, differential evolution, and particle swarm optimization, which are widely known.

1.2. Distributed Processing Systems

Aggregate computing is an emerging approach to the engineering of complex coordination in distributed systems. This approach is based on viewing the system in terms of information that is propagated across groups of devices and their interactions with their peers and the environment [7]. Applications that involve data analysis typically perform their calculations in a data center or cloud environment. As these applications grow in scale, this centralized approach leads to high bandwidth requirements and potentially impractical computational latencies. This has generated interest in computing schemes wherein processing is performed in a distributed manner [8].
According to [9], the development of this technology shows the importance of teaching the aspects of parallel computing. In this regard, the authors of [9] present a model for incorporating parallel and distributed computing (PDC) throughout an undergraduate computer science (CS) curriculum, introducing students to distributed and parallel computing topics at the intermediate level.
Parallel and distributed computing is important when a large amount of data must be handled and processed; some applications of parallel computing are described below.
An application on renewable energy generation can be seen in [10], where a distributed data processing system is presented to improve the estimation of urban solar potential by directly using a dense set of scanning points obtained from aerial laser scans (ALS) that allow incorporating true, complex, and heterogeneous elements common in most urban areas.
Another application on renewable energy can be seen in [11], where a hybrid distributed computing system is used in Apache Spark for wind speed forecasting, which corresponds to an arduous task given the randomness of the wind speed. Using the distributed computing strategy, the system can divide large wind speed datasets into groups and use them in parallel.
In relation to a geomatics application, in [1] a distributed computer framework is provided, allowing data collection and processing. According to the authors, the proposed system can support efficient range queries and large-scale spatial data processing in a Spark cluster and another in Flink, providing an effective cross-platform distributed computing solution for fast processing of large-scale spatial data.
Finally, an application in the field of agriculture can be seen in [12], where it is necessary to evaluate the spatial distributions of crop yields under current and future climatic conditions. This task generally requires considerable effort to prepare the input data and post-process the results, which is why the authors developed a simulation support system to automate repetitive and tedious tasks using virtual machines connected over a local network, allowing for a computer cluster without having to dedicate workstations.

1.3. Virtualization Systems

The concept of virtualization is applied in cloud computing systems to help users and owners achieve better use and efficient management of the cloud at the lowest cost [13]. Live migration of virtual machines (VM) is an essential feature of virtualization, allowing one to migrate virtual machines from one location to another without suspending them. This process has many advantages for data centers, such as load balancing, inline maintenance, power management, and proactive fault tolerance [13]. When a system such as a processor, memory, or I/O device is virtualized, its interface and all visible resources are mapped to a virtual interface in such a way that the actual system is transformed into a different virtual system, or even into a set of multiple virtual systems [14]. Virtualization technologies allow decoupling the architecture and user-perceived behavior of hardware and software resources from the physical deployment [15].
According to [16], virtualization has made it possible to completely isolate virtual machines from each other. When applications running inside virtual machines have real-time constraints, the threads that implement the virtual cores must be scheduled in a predictable way over the physical cores. Meanwhile, reference [17] states that real-time virtual machines are suitable for tightly coupled computer systems wherein tasks are executed on the associated machine; the authors present an approach to support the transfer of tasks between loosely coupled computers in a real-time environment, adding more features without updating the software.
Essentially, energy consumption is an important aspect of virtualization. In line with [18], the high power consumption of cloud data centers presents a significant challenge from both an economic and an environmental perspective. Server consolidation using virtualization technology is widely used to reduce the power consumption rates of data centers. Efficient virtual machine placement (VMP) plays an important role in server consolidation technology; it is an NP (nondeterministic polynomial time) problem for which optimal solutions are not feasible. In addition, reference [19] states that the assignment of a virtual machine to a physical machine affects the power consumption, manufacturing, and downtime of physical machines. According to [20], the power demand of cloud data centers has increased markedly; therefore, dynamic virtual machine consolidation, as one of the effective methods to reduce energy consumption, is widely used in large cloud data centers.
Regarding energy-related work, a hybrid VMP algorithm based on an improved genetic algorithm using permutations and a multidimensional resource allocation strategy is proposed in [18]. The proposed VMP algorithm aims to reduce the high power consumption of cloud data centers by minimizing the number of active servers that host virtual machines; it also seeks to achieve a balanced use of the resources (CPU, RAM, and bandwidth) of active servers, which in turn reduces wasted resources. Meanwhile, in [19], the problem is formulated as a packing optimization to minimize the energy costs of operating and inactive machines. When considering the CPU and memory requirements of a virtual machine, the allocation is limited by the capabilities of the physical machine. Another work can be seen in [20], where, in order to efficiently ensure quality of service (QoS), a VM consolidation approach is proposed that considers the current and future use of resources through host overload detection.

1.4. Document Organization

Distributed processing and virtualization technologies are suitable tools when computing power is needed, as is the case with bio-inspired optimization algorithms, whose stochastic characteristics require running them several times with different settings of their parameters. Therefore, this paper reviews a virtualized parallel processing scheme for the execution of bio-inspired optimization algorithms. The document is organized as follows. The first part reviews concepts of distributed computer systems and virtualization, and also presents the computer system used; then it describes the bio-inspired optimization algorithms and the test functions considered; subsequently, it presents the statistical results obtained, showing the configurations of the algorithms; finally, the conclusions of the work are established.
The objective of this work was to evaluate the virtualized distributed processing system located at the High Performance Computing Center (Centro de Computación de Alto Desempeño—CECAD) of the Universidad Distrital Francisco José de Caldas (UDFJC), which provides a distributed computing service in a virtualized way to the researchers of the UDFJC. The aim was to examine the characteristics of this system for the execution of bio-inspired optimization algorithms and the advantages it offers in relation to processing time.

2. Distributed Processing Systems

According to [21], a distributed system is a collection of autonomous computer elements (nodes) visible to users as a single consistent system. Each of the computer elements can behave independently and can be hardware devices or a software process. In this way, users (people or applications) think they are dealing with a single system; this implies that collaboration between nodes must be presented, which is an important aspect in the development of distributed systems [21]. A distributed system must have the following features to provide maximum performance to users:
Openness: This attribute ensures that a subsystem is continuously open to interaction with other systems, such as those designed to perform inter-machine interactions over a network by allowing distributed systems to expand and scale.
Scalability: A distributed system can function properly even if some aspect of the system scales to a larger size. Three components should be considered: the number of users and other entities that are part of the system, the distance between the farthest nodes of the system, and the number of organizations that exercise administrative control over parts of the system.
Predictable performance: Predictable performance is the ability to provide the desired responsiveness in a timely manner, according to a performance metric that may be the response time associated with the time elapsed between a query in a computer system and response. Another metric corresponds to the rate at which a network sends or receives data. Metrics associated with system utilization and network capacity can also be used to establish the performance.
Security: Security features are primarily intended to provide confidentiality, integrity, and availability; thus, distributed systems must allow communication between programs, users, and resources on different computers by applying the necessary security tools.
Fault tolerance: Distributed systems consist of a large number of hardware and software modules that can fail in the long term. Such component failures can result in a lack of service. Therefore, systems should be able to recover from component failures without performing erroneous actions.
Transparency: Distributed systems should be perceived by users and application developers as a whole and not as a collection of cooperating components. In this way, for the user the locations of the computer systems involved in the operations, data replication, failures, system recovery, etc., are not visible.

Types of Parallel Architecture

On the classification of parallel architectures, Michael Flynn proposed a taxonomy that simplified the categorization of different classes of architectures and control methods based on the relationships of data and instruction (control) with respect to the parallelism of the data flow [22,23].
There are different ways CPUs can be connected together; Flynn’s classification categorizes machines by the number of instruction streams and the number of data streams. Multiple instruction streams mean that different instructions can be executed simultaneously [22,23]. Data streams refer to memory operations; the four resulting combinations are:
  • SISD (single instruction stream, single data stream): This classification corresponds to the traditional single-processor computer. It represents the conventional sequential (serial) processor structure where a single control thread, the flow of instructions, guides the sequence of operations performed on a single data set, one operating at a time.
  • SIMD (single instruction stream, multiple data streams): This architecture supports multiple streams of data to be processed simultaneously by replicating computer hardware. Single statement means that all data streams are processed using the same calculation logic. It can be seen as an array processor, where a single instruction operates in many data units in parallel.
  • MISD (multiple instruction stream, single data stream): This corresponds to a rare architecture, which operates on a single data stream but has multiple computing engines that use the same data stream. That is, multiple processors, each with their own flow of instructions, work on the same data with which all the other processors operate. They could be used to provide fault tolerance, with heterogeneous systems operating on the same data.
  • MIMD (multiple instruction stream, multiple data stream): This is the most generic parallel processing architecture where any type of distributed application is programmed. Multiple stand-alone processors running in parallel work in separate data flows. The logic of the applications running on these processors can also be very different. All distributed systems are recognized as MIMD architectures. At any time, a lot of operations are performed, but they do not have to be the same and are mostly different.
Shared memory and distributed memory systems are two main types of MIMD. In a shared memory system (Figure 1), a collection of stand-alone processors is connected to a memory system over an interconnected network, and each processor can access each memory location. In a shared memory system, processors are usually implicitly communicated by accessing shared data structures.
In a distributed memory system (Figure 2), each processor is coupled with its own private memory, and processor-memory pairs communicate over an interconnected network. On distributed memory systems, processors generally communicate explicitly by sending messages or using special functions that provide access to the memory of another processor [24].
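The explicit, message-based communication of a distributed memory system can be illustrated with a minimal sketch. Here Python threads stand in for the processor-memory pairs of Figure 2 (purely for illustration; a real cluster would use separate processes or nodes exchanging messages, e.g., via MPI); each worker operates only on its own chunk of data and communicates its partial result back explicitly:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    """Each worker operates only on its own chunk (its 'private memory')
    and returns its result explicitly, as a message to the coordinator."""
    return sum(x * x for x in chunk)

def distributed_sum_of_squares(data, workers=4):
    # Split the data so every worker owns a disjoint chunk.
    chunks = [data[i::workers] for i in range(workers)]
    # Workers stand in for the processor-memory pairs of Figure 2.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(partial_sum, chunks))
    # The coordinator combines the received partial results.
    return sum(partials)

result = distributed_sum_of_squares(list(range(10)))  # 0² + 1² + … + 9² = 285
```

The key contrast with the shared memory case of Figure 1 is that no worker ever reads another worker’s chunk; all coordination happens through the explicitly returned partial results.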

3. Description of Virtualization Systems and Process

The strengthening of the cloud computing model has steered technological investments toward virtualization; today, this methodology has positioned itself as a computational requirement that all kinds of organizations demand for their operation, owing to the ability to access information at all times and the advantage of significantly reducing operating costs in terms of software and hardware resources.
As the cloud computing model becomes more extensive, the use of virtualization is increasingly necessary, and it is essential to create new design and integration standards to regulate it. “Virtual infrastructure solutions are ideal for part-to-production environments because they run on industry-standard servers and desktops and are compatible with a wide range of operating systems and application environments, as well as infrastructure and storage” [25].
In the field of computer applications, the infrastructure corresponds to the set of elements that are necessary for the development of an activity [25,26]. In general, two types of infrastructure are identified: hardware (physical) infrastructure and software (logical) infrastructure. The first consists of elements as diverse as air conditioners, sensors, cameras, servers, routers, firewalls, laptops, printers, phones, etc.
The set of logical or software elements ranges from operating systems (Linux, Windows, etc.) to general applications that enable the operation of other specific computer systems or services, such as databases, application servers, or office suites used to optimize, automate, and improve related procedures or tasks.
On the other hand, the term virtualization can be understood as creating through software a virtual version of some technological resource such as a hardware platform, an operating system, a storage device, or another resource network [27].
Nowadays, the consolidation of the cloud computing model has steered companies towards virtualization as a daily requirement among the technological resources that all kinds of organizations need for their operation, providing permanent access to information while reducing capital expenditures. The advantages of this model are summarized in three factors: economy, flexibility, and security. A cloud solution can add or remove workstations and servers, and modify their capabilities or configurations, almost immediately [28].
A virtualized system includes a new software layer, namely, a virtual machine monitor (VMM). The primary function of a VMM is to arbitrate access to the resources of the underlying physical host platform so that multiple operating systems (the VMM’s guests) can share them. The VMM presents each guest OS (operating system) with a set of virtual platform interfaces that constitute a virtual machine. Despite once being confined to specialized, proprietary, high-end server and mainframe systems, virtualization is now increasingly available. The resulting VMMs can support a wide range of legacy and future operating systems while maintaining high performance [29].
The classic benefits of virtualization include better utilization, manageability, and reliability of core framework systems. Multiple users with different operating system requirements can more easily share a virtualized server, operating system updates can be organized on virtual machines to minimize downtime, and the failures of the guest software can be isolated on the virtual machines on which they are produced. While these benefits have traditionally been considered valuable in high-end server systems, recent academic research and new VMM-based emerging products suggest that the benefits of virtualization have greater attractiveness in a wide range of both server and client systems [29].
Virtualization can improve overall system security and reliability by isolating multiple software stacks into their own virtual machines. Security can be improved because intrusions can be confined to the VM on which they occur, while reliability can be improved because software failures in one VM do not affect the other VMs [29].
Virtualization allows running the two environments on the same machine, as can be seen in Figure 3, so that these two environments are completely isolated from each other.
In the case of optimization algorithms, the GA algorithm runs on the OS1 operating system and the statistical analysis is executed on the OS2 operating system. Both operating systems run on top of the virtual machine monitor. VMM virtualizes all resources (for example, processors, memory, secondary storage, and networks) and allocates them to the various virtual machines running over VMM [30].
Figure 4 depicts the virtualization process, where the VMM creates an abstraction layer between the host hardware and the virtual machine operating system, appropriately managing its core resources (CPU, memory, storage, and network connections).

4. Virtualized Distributed Processing System Used

This section describes the virtualized distributed platform for executing bio-inspired optimization algorithms. The computer system consists of network, storage, and processing modules. Figure 5a shows the network module, Figure 5b shows the storage, and Figure 5c shows the processing system.
This computer system corresponds to the High Performance Computing Center (Centro de Computación de Alto Desempeño—CECAD) of the Universidad Distrital Francisco José de Caldas (UDFJC) which provides a distributed computing service in a virtualized way to the researchers of the UDFJC. Once the resources requested by the researcher are allocated, access to the system can be done remotely.

Infrastructure Used

The characteristics of the infrastructure used are:
  • Operating system: Ubuntu Version 18.04.
  • RAM memory (GB): 14.5.
  • Number of processors: 16.
  • Main storage (GB): 80.
  • Secondary storage (GB): not used.
  • Software used: Octave.
Figure 5c shows the physical appearance of the server used. The features of the R610 Server are:
  • 16 processors (Xeon E5570, 2.93 GHz).
  • 16 GB DDR3.
  • 73 GB hard disk.
The CECAD private cloud has multiple R610 compute nodes and uses OpenStack to provide infrastructure as a service (similar to Amazon EC2); this technology takes one of these servers to deploy the requested instance, in this case the "AlgoritmosB" instance. On the other hand, even though the storage of a single server is apparently limited (73 GB), the system has been configured with an architecture based on the Ceph file system, which allows allocating larger storage, in this case 250 GB, since Ceph is configured to allow nodes to access 100 TB of storage from the CECAD SAN. Figure 6 shows the instances using OpenStack. The characteristics of the instance used are presented in Figure 7.

5. Bio-Inspired Optimization Algorithms

The bio-inspired optimization algorithms considered for testing the processing system are genetic algorithms, differential evolution, and particle swarm optimization. These algorithms are described below; then, the parameter configurations used are shown.

5.1. Genetic Algorithms

Genetic algorithms (GA) are one of the approaches to stochastic optimization, belonging to the family of evolutionary computing algorithms that are based on principles of natural evolution and survival [31]. A GA seeks to improve a population by finding a point at which the performance function is optimized; thus, multiple candidate solutions are considered simultaneously [31]. The general steps of a genetic algorithm are as follows:
  • Start the population randomly.
  • Evaluate the performance of each individual.
  • Stochastically select the best individuals.
  • Apply the elitism operator.
  • Apply the crossover operator.
  • Apply the mutation operator.
  • If the completion criterion is not met, return to step 2.
  • Finish by meeting the stop criterion and establish the final solution.
In the first step, an initial population is generated randomly and the fitness function is evaluated for each individual. An elitist selection strategy can then be used, whereby the best individuals, determined by the fitness evaluation, move on to the next generation. The next step is to stochastically apply the crossover operator, where parents are selected according to their fitness values; in this way, individuals with higher fitness values are selected more frequently. For each pair of parents, crossover is applied at random; when no crossover occurs, the two children formed are copies of the two parents. Subsequently, the mutation operator is applied, whereby an element of the encoding of an individual obtained in the previous step is changed randomly. Finally, the values of the objective function are calculated for the new population, and the algorithm terminates if the stop criterion is met [31,32].
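The steps above can be sketched as a minimal real-coded GA. This is an illustrative Python sketch, not the Octave implementation used in the experiments; tournament selection is used here as a simple stand-in for fitness-proportionate selection, and the bounds and parameter defaults are assumptions:

```python
import random

def genetic_algorithm(f, dim=2, pop_size=30, generations=100,
                      p_cross=0.6, p_mut=0.01, bounds=(-5.0, 5.0), seed=1):
    """Minimal real-coded GA (minimization) following the steps above."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Step 1: start the population randomly.
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        # Step 2: evaluate the performance of each individual.
        elite = min(pop, key=f)  # elitism: best individual survives unchanged

        # Step 3: stochastic selection (tournament of two, fitter wins).
        def select():
            a, b = rng.sample(pop, 2)
            return a if f(a) < f(b) else b

        children = [elite]
        while len(children) < pop_size:
            p1, p2 = select(), select()
            # Crossover with probability p_cross; otherwise copy the parents.
            if rng.random() < p_cross and dim > 1:
                cut = rng.randrange(1, dim)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            # Mutation: randomly replace each gene with probability p_mut.
            for c in (c1, c2):
                for i in range(dim):
                    if rng.random() < p_mut:
                        c[i] = rng.uniform(lo, hi)
            children += [c1, c2]
        pop = children[:pop_size]
    best = min(pop, key=f)
    return best, f(best)

sphere = lambda v: sum(c * c for c in v)
best, value = genetic_algorithm(sphere)
```

Because the elite individual is copied unchanged into each new generation, the best objective value found can never degrade from one generation to the next.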

5.2. Differential Evolution Algorithm

The differential evolution (DE) algorithm was initially proposed by Storn and Price [33,34]; it is an evolutionary computing algorithm wherein the next population is established considering the subtraction between individuals of the current population, using a crossover/recombination operator after the mutation [35]. The main steps of a DE algorithm are as follows:
  • Initialize the population in the solution space.
  • Apply the subtraction operator.
  • Apply the recombination operator.
  • Evaluate the performance of each individual.
  • Perform the selection process.
  • If the completion criterion is not met, return to step 2.
  • Finish by meeting the stop criterion and establishing the final solution.
In the first instance, the population is initialized randomly; then the subtraction operator is applied, followed by the recombination operator; in the next step, the selection process is performed. These processes are repeated until the completion criterion is met.
Taking three randomly chosen individuals q_1, q_2, and q_3 from the population, the subtraction operator incorporates the difference between the individuals q_2 and q_3 into the individual q_1, in such a way that a new individual p_i is obtained, which corresponds to:

p_i = q_1 + μ (q_2 − q_3)
Using the mutation constant μ > 0, the subtraction between individuals is scaled. After the mutation, a recombination operation is performed on each individual q_i to generate an individual u_i, which is constructed by mixing the components of p_i and q_i. For this, a random number rand ∈ [0, 1] is compared against the crossover probability P ∈ [0, 1], according to the following equation:

u_i(l) = p_i(l), if rand < P; q_i(l), otherwise.

If an improvement of the objective function is achieved with the intermediate individual u_i, it replaces the individual q_i; otherwise, q_i remains in the next generation.
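The subtraction, recombination, and selection steps can be sketched as follows. This is an illustrative Python sketch of the classic DE/rand/1/bin scheme, not the Octave files used in the experiments; the bounds and parameter defaults are assumptions:

```python
import random

def differential_evolution(f, dim=2, pop_size=20, generations=100,
                           mu=0.85, P=0.9, bounds=(-5.0, 5.0), seed=1):
    """Minimal DE sketch (minimization) following the equations above."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for i, q_i in enumerate(pop):
            # Subtraction (mutation): p_i = q1 + mu * (q2 - q3).
            others = [pop[j] for j in range(pop_size) if j != i]
            q1, q2, q3 = rng.sample(others, 3)
            p_i = [a + mu * (b - c) for a, b, c in zip(q1, q2, q3)]
            # Recombination: mix the components of p_i and q_i.
            u_i = [p_i[l] if rng.random() < P else q_i[l] for l in range(dim)]
            # Selection: keep the intermediate individual only if it improves f.
            new_pop.append(u_i if f(u_i) < f(q_i) else q_i)
        pop = new_pop
    best = min(pop, key=f)
    return best, f(best)

sphere = lambda v: sum(c * c for c in v)
best, value = differential_evolution(sphere)
```

Note that the greedy selection step makes the objective value of each population slot non-increasing over the generations.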

5.3. Particle Swarm Optimization Algorithm

The concept of swarm-based optimization was proposed by James Kennedy and Russell Eberhart based on the social behavior of bird flocks [6]. In general, the steps involved in the PSO (particle swarm optimization) algorithm are as follows:
  • Initialize the swarm in the solution space.
  • Evaluate the performance of each individual.
  • Find the best individual and collective performances.
  • Calculate the speed and position of each individual.
  • Move each individual to the new position.
  • If the completion criterion is not met, return to step 2.
  • Finish by meeting the stop criterion and establishing the final solution.
As can be appreciated, in the first instance the swarm is initialized in the solution space; then the performance of each individual is evaluated, finding the best individual and collective performances; with these values, the speed of each particle is calculated, which is used to determine the displacement of each individual to its new position. The above processes are carried out until a completion criterion is met. The following expression is used to establish the position of each individual:
x_i[n+1] = x_i[n] + v_i[n+1]
For the calculation of the speed there are different alternatives, one of the most representative ones being the one which incorporates an inertia factor described by the following equation:
v_i[n+1] = w v_i[n] + α_p β_pi (x_pi − x_i[n]) + α_g β_gi (x_g − x_i[n])
In the above equations, v_i and x_i correspond to the velocity and position of the i-th individual. On the other hand, x_pi is the best position found by the i-th individual, and x_g is the best position found by the swarm. Additionally, β_pi and β_gi are random numbers in the range [0, 1]. Finally, w is an inertia value, α_p is the acceleration constant of the cognitive part, and α_g is the acceleration constant of the social part.
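The two update equations can be sketched directly. This is an illustrative Python sketch, not the Octave implementation used in the experiments; the bounds, inertia value, and acceleration constants are assumptions:

```python
import random

def pso(f, dim=2, swarm=30, iters=100, w=0.7, alpha_p=1.5, alpha_g=1.5,
        bounds=(-5.0, 5.0), seed=1):
    """Minimal PSO with inertia factor, following the update equations above."""
    rng = random.Random(seed)
    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    v = [[0.0] * dim for _ in range(swarm)]
    xp = [xi[:] for xi in x]   # best position found by each individual
    xg = min(xp, key=f)[:]     # best position found by the swarm
    for _ in range(iters):
        for i in range(swarm):
            for l in range(dim):
                bp, bg = rng.random(), rng.random()  # beta_p, beta_g in [0, 1]
                # v_i[n+1] = w v_i[n] + a_p b_p (x_pi - x_i) + a_g b_g (x_g - x_i)
                v[i][l] = (w * v[i][l]
                           + alpha_p * bp * (xp[i][l] - x[i][l])
                           + alpha_g * bg * (xg[l] - x[i][l]))
                x[i][l] += v[i][l]  # move the individual to its new position
            if f(x[i]) < f(xp[i]):
                xp[i] = x[i][:]
                if f(xp[i]) < f(xg):
                    xg = xp[i][:]
    return xg, f(xg)

sphere = lambda z: sum(c * c for c in z)
best, value = pso(sphere)
```

The inertia value w trades off exploration against convergence: values below 1 damp the velocities over time, pulling the swarm toward the best positions found.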

6. Experiments Configuration

The configuration of the algorithms considers two representative cases, which are executed on eight test functions with and without virtualization. With these experiments, we sought to show that the virtualization scheme reduces the execution times of the algorithms without altering their performance.

6.1. Configuration for Genetic Algorithms

Two recommended standard configurations are used for the case considered. The first configuration uses 50 individuals, with a mutation probability of 0.001 and a crossover probability of 0.6 [36]. The second configuration has 30 individuals, a mutation probability of 0.01, and a probability associated with the genes exchanged between individuals equal to 0.9 [37]. Table 1 summarizes the deployed configurations.

6.2. Configuration for Differential Evolution

The differential evolution method starts from a randomly initialized population, from which the next population is established considering the difference between individuals of the current population. Two configurations of the differential evolution algorithm were used, following the recommendations proposed in [38].
The first configuration has 40 members, a crossover probability of 0.9784, and a step size of 0.6876. For the second configuration, taking into account the rule given by Price and Storn [33], the number of members is 20, the crossover probability is 1, since high values guarantee appropriate contour conditions for the evolution algorithm [34], and the step size is 0.85. The summary of the configurations is presented in Table 2.

6.3. Particle Swarm Optimization Configuration

For PSO, the two configurations proposed in [39] for the algorithm with inertia factor are used. Parameter selection is based on an analysis of the dynamic behavior of the swarm. Both cases use 30 particles; Table 3 shows the selected configurations.

7. Tests Functions

Ideally, the test functions chosen to evaluate an optimization algorithm should have features similar to those of the real-world problem of interest. In practice, however, the specialized literature uses artificial functions to test these optimization algorithms.
The multi-dimensional test functions can be seen in Table 4; the selection was made considering the reports [40,41,42,43], wherein they are used for testing with bio-inspired algorithms. The characteristics of the test functions can be seen in Table 5. This table shows that the functions employed have different limits of the search space.
As noted in Table 5, the test functions f 2 , f 3 , and f 4 have their global minimum located at a non-zero point, while for all other test functions the global minimum is located at zero.
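Several of the functions in Table 4 can be written directly from their formulas; the following Python sketch implements three of them (assuming the standard definitions of the Spherical, Rastrigin, and Ackley functions, which match the table):

```python
import numpy as np

def sphere(x):
    """f1: sum of squares; global minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def rastrigin(x):
    """f6: highly multi-modal; global minimum 0 at the origin."""
    return float(np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10))

def ackley(x):
    """f8: global minimum 0 at the origin."""
    d = x.size
    return float(np.e + 20
                 - 20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
                 - np.exp(np.sum(np.cos(2 * np.pi * x)) / d))
```

Each function accepts a NumPy vector of any dimension D, as in the experiments (D = 10).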

8. Experimental Results

This section compares the performances of the selected bio-inspired algorithms with and without the virtualization scheme. The results are first reviewed considering the value obtained from the target function, and then the results obtained from the execution time (run-time) are analyzed.
In these results, M2 corresponds to the distributed processing scheme, while M1 refers to a dedicated PC (personal computer) with the following characteristics:
  • Operating System: Windows 8.
  • RAM Memory (GB): 6.
  • Processor: i5 2.60 GHz.
  • Main storage (GB): 680.
  • Software used: Octave.
In both cases, the algorithms and data collection were implemented in the free software GNU Octave. Each configuration was run 50 times for each test function (taking 10 dimensions).
In order to identify the results, the first part of each tag refers to the class of algorithm (GA, DE, or PSO); then the configuration (C1 or C2) is indicated; finally, the machine on which the algorithm was executed is given (M1 or M2).
Different source files were used for the experiments. For the implementation of GA, the source posted on GitHub by user "shenbennwdsl" was used [44]; for DE, the files developed by Rainer Storn, Ken Price, Arnold Neumaier, and Jim Van Zandt [45]; for PSO, the source file developed by Matthew P. Kelly [46]; and for the test functions, the files developed by Brian Birge [47] and by Sonja Surjanovic and Derek Bingham [48]. Finally, all files used in this work to implement the experiments and present the results can be downloaded from GitHub [49].
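The experimental protocol described above (50 runs per configuration and test function, recording the objective value and the run-time of each run) can be sketched as follows; the `alg` callables and the dictionary layout are hypothetical conveniences, not the structure of the actual Octave scripts:

```python
import time
import numpy as np

def run_experiments(algorithms, test_functions, runs=50, dim=10, seed=0):
    """Run every (algorithm configuration, test function) pair `runs` times,
    collecting the best objective value and the wall-clock time of each run."""
    rng = np.random.default_rng(seed)
    results = {}
    for alg_name, alg in algorithms.items():
        for f_name, (f, low, high) in test_functions.items():
            values, times = [], []
            for _ in range(runs):
                t0 = time.perf_counter()
                best = alg(f, low, high, dim, rng)   # returns best objective value
                times.append(time.perf_counter() - t0)
                values.append(best)
            results[(alg_name, f_name)] = {
                "Max": max(values), "Min": min(values),
                "Average": float(np.mean(values)), "STD": float(np.std(values)),
                "total_time": float(np.sum(times)),
            }
    return results
```

The per-pair statistics collected here (Max, Min, Average, STD) correspond to the rows reported in Tables 6–11.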

8.1. GA Algorithm Results

Table 6 shows the results of executing the genetic algorithm for the value of the target function, and Table 7 shows the results for the run-time. Graphically, the results for the values of the target function can be seen as box plots in Figure 8, and those for the run-time in Figure 9. In Figure 8 the configurations associated with M1 and M2 present similar results for the objective functions; meanwhile, Figure 9 shows that in all cases the processing time associated with machine M2 is less than that of M1.

8.2. DE Algorithm Results

For the execution of the differential evolution algorithm, Table 8 summarizes the statistical results for the value of the target function, while Table 9 contains the run-times. Box plots of the values of the target function are shown in Figure 10, and box plots of the run-times in Figure 11. The results of Figure 10 show that configuration C1 of the DE algorithm executed on machine M1 yields results similar to those obtained on machine M2 for the value of the objective function; the same happens with configuration C2 executed on M1 and M2. Meanwhile, considering the run-time, Figure 11 shows that executing configuration C1 on machine M2 takes less time than the same execution on machine M1; configuration C2 also runs faster on M2.

8.3. PSO Algorithm Results

Table 10 and Table 11 display the statistical results for the value of the target function and the run-time of the PSO algorithm. Box plots of the results can be seen in Figure 12 and Figure 13 for the values of the objective function and the execution time, respectively. As observed in Figure 12, the value of the target function is not affected by the distributed processing scheme, while the run-time is reduced by using this scheme (Figure 13).

8.4. Algorithm Comparison

It is worth pointing out that this work focuses on observing the changes between executions of each algorithm on the machines employed (M1 and M2); therefore, the results are presented separately for each algorithm. Nevertheless, once the hardware for the experiments has been defined, a study would typically focus on comparing the algorithms themselves (the variation among them), which requires organizing the results in a single table. As an example, Table 12 displays the summary of results considering the objective function, while Table 13 displays the results for the execution time.
Usually, such tables highlight the best value reached by one of the algorithms for each test function, so that a global comparison can be made of both the obtained value of the objective function and the execution time of the algorithms employed. Table 12 and Table 13 each show an example of how the best obtained values can be marked.

8.5. Execution Time Analysis

As observed in the results above, the values of the target function for configurations C1 and C2 are not affected by the type of processing system (M1 or M2), but the processing time is. Therefore, this section analyzes only the execution times of the algorithms, in order to quantify the differences observed.
First, Table 14 contains the total run-time values for each configuration of the GA algorithm in each target function. Secondly, the total results for the time of execution of the DE algorithm can be seen in Table 15. Finally, the total run-time values of the PSO algorithm are presented in the Table 16.
The total values for the execution times of each algorithm can be seen in Table 17. In total, a difference of 11,054 s (3.07 h) is observed in the execution of all algorithms, equivalent to 25.12%. Broken down by algorithm, the time gained is 10.28% for GA, 8.72% for DE, and 6.12% for PSO. The percentage of time gained (TG) is calculated using the following equation:
TG = (Difference / Total time for M1) × 100%
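This calculation can be expressed as a minimal Python helper (the totals in the usage test are placeholder values, not the measured times of Table 17):

```python
def time_gained(total_m1, total_m2):
    """Percentage of run-time gained by M2 relative to M1:
    TG = (total_m1 - total_m2) / total_m1 * 100, where the numerator
    is the run-time difference between the two machines."""
    return (total_m1 - total_m2) / total_m1 * 100.0
```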

9. Discussion

In the results, it can be seen that, on average, the values obtained for the objective functions are not affected, which allows the experiments to be reproduced on different machines. It can also be seen that using the CECAD machine decreases the processing time, which is what this work sought to verify.
It should be noted that in this work the effects of the algorithms' parameters on the objective function were not analyzed in depth, since the goal was to verify that the same average result was obtained on the two machines used. Before carrying out performance tests between algorithms, we first sought to establish the most appropriate computing system.
Although we observed the advantage of the virtualized distributed processing system for the execution of the algorithms considered, only one of the possible configurations offered by CECAD was used, limited by the researchers' requests at the time the algorithms were executed. To consider this aspect in further work, the study could include the availability of CECAD resources for an average researcher as well as the time allocated for their use. In this regard, requests must be made in such a way that the administration can manage these resources without leaving other researchers without access.
For further research, the reports [50,51] can be taken as references, since test functions of greater size and complexity can be considered for testing the efficacy of the distributed system. In the first place, [50] can be considered for testing bio-inspired algorithms using parallel PC cluster systems on large-scale multimodal functions; secondly, the functions discussed in [51] can be used to observe problem complexity.
Considering the restrictions of CECAD, this work could be extended with other computing systems, such as a non-homogeneous cluster of PCs, or a cluster of mini PCs built from small single-board computers like the Raspberry Pi. Additionally, various ways of distributing the execution of bio-inspired optimization algorithms could be evaluated.

10. Conclusions

Aiming at a wide range of experiments, the tests were performed with three bio-inspired algorithms under different parameter configurations using eight test functions which had different characteristics. To have comparable results, the well-known bio-inspired optimization algorithms GA, DE, and PSO were used.
The results showed that the value of the target function is not affected by the distributed virtualization scheme, while the run-time is reduced by using the distributed virtualization system. Thus, the algorithms can be executed on different machines without affecting the result.
Distributed and virtualization technologies can optimize performance and simplify the management of the information infrastructure, as they do when running bio-inspired optimization algorithms. As seen in the results, the processing time decreases when using the CECAD.
In the development of this work it was observed that the distributed virtualization system under consideration is an adequate platform for the execution of bio-inspired optimization algorithms. However, in the case of CECAD, the amount of resources and the computing capacity are subject to the number of requests made by the researchers.

Author Contributions

Conceptualization, N.G., H.E., and J.B.; methodology, N.G. and H.E.; project administration, J.B.; supervision, J.B.; validation, N.G.; writing—original draft, H.E.; writing—review and editing, N.G., H.E., and J.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors express gratitude to the Universidad Distrital Francisco José de Caldas, and also to the CECAD (Centro de Computación de Alto Desempeño) High Performance Computing Center and the engineer Pedro J. Vargas Barrios.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. He, Z.; Liu, G.; Ma, X.; Chen, Q. GeoBeam: A distributed computing framework for spatial data. Comput. Geosci. 2019, 131, 15–22. [Google Scholar] [CrossRef]
  2. Espitia, H.; Sofrony, J. Statistical analysis for vortex particle swarm optimization. Appl. Soft Comput. 2018, 67, 370–386. [Google Scholar] [CrossRef]
  3. Weise, T. Global Optimization Algorithms—Theory and Application; Self-Published Thomas Weise: 2009. Available online: http://www.it-weise.de/projects/book.pdf (accessed on 7 July 2020).
  4. Dorigo, M.; Di-Caro, G.; Gambardella, L. Ant algorithms for discrete optimization. Artif. Life 1999, 5, 137–172. [Google Scholar] [CrossRef] [Green Version]
  5. Passino, K. Biomimicry of bacterial foraging for distributed optimization and control. IEEE Control. Syst. Mag. 2002, 22, 52–67. [Google Scholar]
  6. Russell, E.; James, K. Particle swarm optimization. In Proceedings of the ICNN’95—IEEE Proceedings Neural Networks, Perth, WA, Australia, 27 November–1 December 1995. [Google Scholar]
  7. Viroli, M.; Beal, J.; Damiani, F.; Audrito, G.; Casadei, R.; Pianini, D. From distributed coordination to field calculus and aggregate computing. J. Log. Algebr. Methods Program. 2019, 109, 100486. [Google Scholar] [CrossRef]
  8. Cooke, R.A.; Fahmy, S.A. A model for distributed in-network and near-edge computing with heterogeneous hardware. Future Gener. Comput. Syst. 2020, 105, 395–409. [Google Scholar] [CrossRef]
  9. Newhall, T.; Danner, A.; Webb, K.C. Pervasive parallel and distributed computing in a liberal arts college curriculum. J. Parallel Distrib. Comput. 2017, 105, 53–62. [Google Scholar] [CrossRef]
  10. Vo, A.V.; Laefer, D.F.; Smolic, A.; Zolanvari, S.M.I. Per-point processing for detailed urban solar estimation with aerial laser scanning and distributed computing. ISPRS J. Photogramm. Remote Sens. 2019, 155, 119–135. [Google Scholar] [CrossRef]
  11. Xu, Y.; Liu, H.; Long, Z. A distributed computing framework for wind speed big data forecasting on Apache Spark. Sustain. Energy Technol. Assess. 2020, 37, 100582. [Google Scholar] [CrossRef]
  12. Kim, J.; Park, J.; Hyun, S.; Fleisher, D.H.; Kim, K.S. Development of an automated gridded crop growth simulation support system for distributed computing with virtual machines. Comput. Electron. Agric. 2020, 169, 105196. [Google Scholar] [CrossRef]
  13. Noshy, M.; Ibrahim, A.; Ali, H.A. Optimization of live virtual machine migration in cloud computing: A survey and future directions. J. Netw. Comput. Appl. 2018, 110, 1–10. [Google Scholar] [CrossRef]
  14. Smith, J.E.; Nair, R. Virtual Machines, Versatile Platforms for Systems and Processes; Morgan Kaufmann: Burlington, MA, USA, 2005. [Google Scholar]
  15. Figueiredo, R.; Dinda, P.A.; Fortes, J. Resource Virtualization Renaissance. Computer 2005, 38, 28–31. [Google Scholar] [CrossRef]
  16. Abeni, L.; Biondi, A.; Bini, E. Hierarchical scheduling of real-time tasks over Linux-based virtual machines. J. Syst. Softw. 2019, 149, 234–249. [Google Scholar] [CrossRef]
  17. Elsedfy, M.O.; Murtada, W.A.; Abdulqawi, E.F.; Gad-Allah, M. A real-time virtual machine for task placement in loosely-coupled computer systems. Heliyon 2019, 5, e01998. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Abohamama, A.S.; Hamouda, E. A hybrid energy-Aware virtual machine placement algorithm for cloud environments. Expert Syst. Appl. 2020, 150, 113306. [Google Scholar] [CrossRef]
  19. Wei, C.; Hu, Z.H.; Wang, Y.G. Exact algorithms for energy-efficient virtual machine placement in data centers. Future Gener. Comput. Syst. 2020, 106, 77–91. [Google Scholar] [CrossRef]
  20. Hsieh, S.Y.; Liu, C.S.; Buyya, R.; Zomaya, A.Y. Utilization-prediction-aware virtual machine consolidation approach for energy-efficient cloud data centers. J. Parallel Distrib. Comput. 2020, 139, 99–109. [Google Scholar] [CrossRef]
  21. Van Steenm, M.; Tanenbaum, A.S. A brief introduction to distributed systems. Computing 2016, 98, 967–1009. [Google Scholar] [CrossRef] [Green Version]
  22. Sitaram, D.; Manjunath, G. Chapter 5—Paradigms for Developing Cloud Applications. In Moving to the Cloud Developing Apps in the New World of Cloud Computing; Elsevier: Amsterdam, The Netherlands, 2012; pp. 205–253. [Google Scholar]
  23. Sterling, T.; Anderson, M.; Brodowicz, M. Chapter 2—HPC Architecture 1: Systems and Technologies. In High Performance Computing Modern Systems and Practices; Morgan Kaufmann: Burlington, MA, USA, 2018; pp. 43–82. [Google Scholar]
  24. Pacheco, P.S. Chapter 2—Parallel Hardware and Parallel Software. In An Introduction to Parallel Programming; Elsevier: Amsterdam, The Netherlands, 2011; pp. 15–81. [Google Scholar]
  25. Gélvez, N.; Moreno, C.; Ruiz, D. La virtualización, un enfoque empresarial hacia el futuro. Redes de Ingeniería 2013, 4, 116–126. [Google Scholar] [CrossRef]
  26. Holm, N.T. A Cosmology for a Different Computer Universe: Data Model, Mechanisms, Virtual Machine and Visualization Infrastructure. J. Digit. Inf. 2004, 5, 77. [Google Scholar]
  27. Turban, E.; King, D.; Lee, J.; Viehland, D. Chapter 19: Building E-Commerce Applications and Infrastructure. In Electronic Commerce A Managerial Perspective, 5th ed.; Prentice-Hall: Upper Saddle River, NJ, USA, 2008; pp. 1–29. [Google Scholar]
  28. Dillon, T.; Chen, W.; Chang, E. Cloud Computing: Issues and Challenges. In Proceedings of the 24th IEEE International Conference on Advanced Information Networking and Applications, Perth, WA, Australia, 20–23 April 2010. [Google Scholar]
  29. Uhlig, R.; Neiger, G.; Rodgers, D.; Santoni, A.L.; Martins, F.C.M.; Anderson, A.V.; Bennett, S.M.; Kagi, A.; Leung, F.H.; Smith, L. Intel virtualization technology. Computer 2005, 38, 48–56. [Google Scholar] [CrossRef]
  30. Menascé, D.A. Virtualization: Concepts, Applications, and Performance Modeling. In Proceedings of the 31th International Computer Measurement Group Conference, Orlando, FL, USA, 4–9 December 2005; pp. 407–414. [Google Scholar]
  31. Spall, J. Stochastic optimization. In Handbook of Computational Statistics; Springer: Berlin/Heidelberg, Germany, 2004; pp. 170–194. [Google Scholar]
  32. Mitchell, M. An introduction to genetic algorithms. In A Bradford Book; The MIT Press: Cambridge, MA, USA, 1998. [Google Scholar]
  33. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  34. Price, K.; Storn, R.; Lampinen, J. Differential Evolution A Practical Approach to Global Optimization; Springer Natural Computing Series: Berlin/Heidelberg, Germany, 2005. [Google Scholar]
  35. Bansal, J.C.; Singh, P.K.; Saraswat, M.; Verma, A.; Jadon, S.; Abraham, A. Inertia Weight Strategies in Particle Swarm Optimization. In Proceedings of the IEEE Third World Congress on Nature and Biologically Inspired Computing, Salamanca, Spain, 19–21 October 2011. [Google Scholar]
  36. De-Jong, K.; Spears, W. An Analysis of the Interacting Roles of Population Size and Crossover in Genetic Algorithms. In Proceedings of the First Workshop Parallel Problem Solving from Nature, Dortmund, Germany, 1–3 October 1990. [Google Scholar]
  37. Grefenstette, J.J. Optimization of Control Parameters for Genetic Algorithms. IEEE Trans. Syst. Man Cybern. 1986, 16, 122–128. [Google Scholar] [CrossRef]
  38. Hvass, M. Good Parameters for Differential Evolution; Technical Report no. HL1002; Hvass Laboratories: 2010. Available online: https://pdfs.semanticscholar.org/48aa/36e1496c56904f9f6dfc15323e0c45e34a4c.pdf (accessed on 7 July 2020).
  39. Trelea, I.C. The particle swarm optimization algorithm: Convergence analysis and parameter selection. Inf. Process. Lett. 2003, 85, 317–325. [Google Scholar] [CrossRef]
  40. Bratton, D.; Kennedy, J. Defining a standard for particle swarm optimization. In Proceedings of the IEEE Swarm Intelligence Symposium (SIS), Honolulu, HI, USA, 1–5 April 2007. [Google Scholar]
  41. Evers, G. An Automatic Regrouping Mechanism to Deal with Stagnation in Particle Swarm Optimization. Master’s Thesis, University of Texas-Pan American, Edinburg, TX, USA, 2009. [Google Scholar]
  42. Hvass, M. Tuning & Simplifying Heuristical Optimization. Ph.D. Thesis, University of Southampton, Southampton, UK, 2010. [Google Scholar]
  43. Kennedy, J.; Eberhart, R.; Shi, Y. Swarm Intelligence; Morgan Kaufmann Publishers: Burlington, MA, USA, 2001. [Google Scholar]
  44. GitHub. Available online: https://gist.github.com/shenbennwdsl/a2aa06de6f841e98e187 (accessed on 5 November 2019).
  45. GitHub. Available online: https://github.com/sriki18/MDEpBX-Matlab/blob/master/deopt.m (accessed on 5 November 2019).
  46. GitHub. Available online: https://github.com/MatthewPeterKelly/ParticleSwarmOptimization (accessed on 5 November 2019).
  47. MathWorks. Available online: https://www.mathworks.com/matlabcentral/fileexchange/7506-particle-swarm-optimization-toolbox (accessed on 5 November 2019).
  48. Virtual Library of Simulation Experiments. Available online: http://www.sfu.ca/~ssurjano/optimization.html (accessed on 5 November 2019).
  49. GitHub. Available online: https://github.com/IngGelvezGarcia/Experimental-test-Evolutive-Algorithms (accessed on 9 July 2020).
  50. Fan, S.-K.S.; Chang, J.-M. Dynamic multi-swarm particle swarm optimizer using parallel PC cluster systems for global optimization of large-scale multimodal functions. Eng. Optim. 2010, 42, 431–451. [Google Scholar] [CrossRef]
  51. Fan, S.-K.S.; Zahara, E. A Hybrid Simplex Search and Particle Swarm Optimization for Unconstrained Optimization. Eur. J. Oper. Res. 2007, 181, 527–548. [Google Scholar] [CrossRef]
Figure 1. Shared memory systems.
Figure 2. Distributed memory systems.
Figure 3. Example of the virtualization process [30].
Figure 4. Representation of the virtualization process [25].
Figure 5. Virtualized distributed processing system used. (a) Network equipment used; (b) storage equipment used; (c) processing equipment used.
Figure 6. Instances using Openstack.
Figure 7. Description of the instances used.
Figure 8. Box plot considering the value of the objective function using GA, where the results for M1 and M2 do not present the difference between the respective configurations C1 and C2. (a) Diagram for f 1 . (b) Diagram for f 2 . (c) Diagram for f 3 . (d) Diagram for f 4 . (e) Diagram for f 5 . (f) Diagram for f 6 . (g) Diagram for f 7 . (h) Diagram for f 8 .
Figure 9. Box plot for the value of run-time using GA, where the results for M1 and M2 show the difference between the respective configurations C1 and C2. (a) Diagram for f 1 . (b) Diagram for f 2 . (c) Diagram for f 3 . (d) Diagram for f 4 . (e) Diagram for f 5 . (f) Diagram for f 6 . (g) Diagram for f 7 . (h) Diagram for f 8 .
Figure 10. Box plot considering the value of the objective function using DE, where the results for M1 and M2 do not present the differences between the respective configurations C1 and C2. (a) Diagram for f 1 . (b) Diagram for f 2 . (c) Diagram for f 3 . (d) Diagram for f 4 . (e) Diagram for f 5 . (f) Diagram for f 6 . (g) Diagram for f 7 , (h) Diagram for f 8 .
Figure 11. Box plot for the value of run-time using DE, where the results for M1 and M2 show the difference between the respective configurations C1 and C2. (a) Diagram for f 1 . (b) Diagram for f 2 . (c) Diagram for f 3 . (d) Diagram for f 4 . (e) Diagram for f 5 . (f) Diagram for f 6 . (g) Diagram for f 7 . (h) Diagram for f 8 .
Figure 12. Box plot considering the value of the objective function using PSO, where the results for M1 and M2 do not present the difference for the respective configurations C1 and C2. (a) Diagram for f 1 . (b) Diagram for f 2 . (c) Diagram for f 3 . (d) Diagram for f 4 . (e) Diagram for f 5 . (f) Diagram for f 6 . (g) Diagram for f 7 . (h) Diagram for f 8 .
Figure 13. Box plot for the value of run-time using PSO, where the results for M1 and M2 show the difference for the respective configurations C1 and C2. (a) Diagram for f 1 . (b) Diagram for f 2 . (c) Diagram for f 3 . (d) Diagram for f 4 . (e) Diagram for f 5 . (f) Diagram for f 6 . (g) Diagram for f 7 . (h) Diagram for f 8 .
Table 1. Parameters configurations for the genetic algorithm.
Configuration | Population | Mutation (Prob.) | Crossover (Prob.)
AG-C1 | 50 | 0.001 | 0.6
AG-C2 | 30 | 0.01 | 0.9
Table 2. Parameters’ configurations for the differential evolution algorithm.
Configuration | Population | Crossover (Prob.) | Step Size
DE-C1 | 48 | 0.9784 | 0.6876
DE-C2 | 20 | 1 | 0.85
Table 3. Parameter configuration for particle swarm optimization.
Configuration | w | α p | α g
PSO-C1 | 0.600 | 1.7 | 1.7
PSO-C2 | 0.729 | 1.494 | 1.494
Table 4. Generalized test functions used.
Name | Dim | Limits | Equation
Spherical | D | [−100, 100]^D | f1(x) = Σ_{i=1..D} x_i²
Levy | D | [−10, 10]^D | f2(x) = sin²(πw_1) + Σ_{i=1..D−1} (w_i − 1)² [1 + 10 sin²(πw_i + 1)] + (w_D − 1)² [1 + sin²(2πw_D)], where w_i = 1 + (x_i − 1)/4
Styblinski-Tang | D | [−5.12, 5.12]^D | f3(x) = (1/2) Σ_{i=1..D} (x_i⁴ − 16x_i² + 5x_i)
Rosenbrock Rotate | D | [−30, 30]^D | f4(x) = Σ_{i=1..D−1} [100 (x_{i+1} + x_i²)² + (x_i + 1)²]
Griewank | D | [−50, 50]^D | f5(x) = 1 + (1/4000) Σ_{i=1..D} x_i² − Π_{i=1..D} cos(x_i/√i)
Rastrigin | D | [−5.12, 5.12]^D | f6(x) = Σ_{i=1..D} [x_i² − 10 cos(2πx_i) + 10]
Schaffer | D | [−30, 30]^D | f7(x) = Σ_{i=1..D−1} (x_i² + x_{i+1}²)^0.25 [sin²(50 (x_i² + x_{i+1}²)^0.1) + 1]
Ackley | D | [−30, 30]^D | f8(x) = e + 20 − 20 exp(−0.2 √((1/D) Σ_{i=1..D} x_i²)) − exp((1/D) Σ_{i=1..D} cos(2πx_i))
Table 5. Features of the test functions used.
Name | Dim | Multi-Modal | Optimal Point | Limits | Optimal Value
Spherical | D | No | (0.0)^D | [−100, 100]^D | 0
Levy | D | Yes | (1.0)^D | [−10, 10]^D | 0
Styblinski-Tang | D | Yes | (−2.903534)^D | [−5.12, 5.12]^D | −39.16599 D
Rosenbrock Rotate | D | Yes | (−1.0)^D | [−30, 30]^D | 0
Griewank | D | Yes | (0.0)^D | [−50, 50]^D | 0
Rastrigin | D | Yes | (0.0)^D | [−5.12, 5.12]^D | 0
Schaffer | D | Yes | (0.0)^D | [−30, 30]^D | 0
Ackley | D | Yes | (0.0)^D | [−30, 30]^D | 0
Table 6. Statistical results for the value of the objective function using GA.
f1 | GA-C1-M1 | GA-C2-M1 | GA-C1-M2 | GA-C2-M2
Max | 803.1 | 851.1 | 693.48 | 878.87
Min | 110.12 | 231.66 | 96.881 | 198.67
Average | 396.34 | 512.71 | 363.63 | 524.55
STD | 164.16 | 145.59 | 165.86 | 165.07

f2 | GA-C1-M1 | GA-C2-M1 | GA-C1-M2 | GA-C2-M2
Max | 4.0682 | 2.8756 | 4.1886 | 2.9012
Min | 0.43532 | 0.73858 | 0.50167 | 0.49925
Average | 2.0062 | 1.6742 | 1.7204 | 1.6712
STD | 1.094 | 0.43556 | 0.91603 | 0.49293

f3 | GA-C1-M1 | GA-C2-M1 | GA-C1-M2 | GA-C2-M2
Max | −361.86 | −369.18 | −360.01 | −366.04
Min | −385.82 | −390.21 | −389.06 | −385.25
Average | −376.93 | −378.57 | −377.46 | −378.18
STD | 5.3254 | 4.3697 | 5.5337 | 4.2567

f4 | GA-C1-M1 | GA-C2-M1 | GA-C1-M2 | GA-C2-M2
Max | 7.484×10^5 | 2.6675×10^5 | 6.5106×10^5 | 2.7881×10^5
Min | 11809 | 22674 | 10592 | 14904
Average | 2.1762×10^5 | 1.2786×10^5 | 2.032×10^5 | 1.1931×10^5
STD | 1.81×10^5 | 61355 | 1.6132×10^5 | 59792

f5 | GA-C1-M1 | GA-C2-M1 | GA-C1-M2 | GA-C2-M2
Max | 1.0079 | 0.90574 | 1.0422 | 0.95218
Min | 0.13795 | 0.51898 | 0.13518 | 0.4226
Average | 0.62559 | 0.72619 | 0.64354 | 0.72477
STD | 0.29169 | 0.10053 | 0.27751 | 0.11398

f6 | GA-C1-M1 | GA-C2-M1 | GA-C1-M2 | GA-C2-M2
Max | 26.967 | 22.369 | 26.161 | 23.734
Min | 9.0212 | 11.351 | 6.8412 | 7.9241
Average | 18.404 | 17.323 | 16.87 | 17.45
STD | 4.2545 | 2.6509 | 4.1317 | 3.2635

f7 | GA-C1-M1 | GA-C2-M1 | GA-C1-M2 | GA-C2-M2
Max | 15.554 | 15.825 | 15.913 | 16.091
Min | 5.81 | 9.5476 | 6.1036 | 9.749
Average | 9.7471 | 13.107 | 9.4184 | 13.142
STD | 2.1818 | 1.3576 | 2.1384 | 1.3547

f8 | GA-C1-M1 | GA-C2-M1 | GA-C1-M2 | GA-C2-M2
Max | 3.4624 | 5.2211 | 3.8739 | 5.3901
Min | 0.88282 | 3.3081 | 1.2655 | 3.0742
Average | 2.4774 | 4.2857 | 2.7235 | 4.253
STD | 0.5349 | 0.44425 | 0.55014 | 0.53566
Table 7. Statistical results for the execution time (run-time), using GA.
f1 | GA-C1-M1 | GA-C2-M1 | GA-C1-M2 | GA-C2-M2
Max | 24.144 | 14.818 | 23.045 | 13.121
Min | 21.58 | 14.109 | 19.585 | 12.583
Average | 22.567 | 14.376 | 20.548 | 12.98
STD | 0.83158 | 0.21588 | 0.78465 | 0.14111

f2 | GA-C1-M1 | GA-C2-M1 | GA-C1-M2 | GA-C2-M2
Max | 53.419 | 33.995 | 43.846 | 28.084
Min | 52.5 | 32.746 | 42.469 | 27.325
Average | 52.705 | 33.576 | 43.305 | 27.68
STD | 0.19373 | 0.45674 | 0.40224 | 0.29811

f3 | GA-C1-M1 | GA-C2-M1 | GA-C1-M2 | GA-C2-M2
Max | 33.675 | 21.241 | 27.961 | 16.951
Min | 32.444 | 20.753 | 27.693 | 16.249
Average | 32.656 | 20.933 | 27.829 | 16.625
STD | 0.2189 | 0.12987 | 0.051181 | 0.19769

f4 | GA-C1-M1 | GA-C2-M1 | GA-C1-M2 | GA-C2-M2
Max | 26.432 | 17.195 | 23.177 | 14.662
Min | 25.832 | 16.809 | 22.28 | 14.579
Average | 26.12 | 16.988 | 22.806 | 14.625
STD | 0.12577 | 0.10671 | 0.3793 | 0.021441

f5 | GA-C1-M1 | GA-C2-M1 | GA-C1-M2 | GA-C2-M2
Max | 56.463 | 37.643 | 25.431 | 15.981
Min | 51.667 | 32.043 | 24.527 | 15.403
Average | 52.609 | 32.336 | 25.207 | 15.777
STD | 0.98871 | 0.79033 | 0.28878 | 0.2308

f6 | GA-C1-M1 | GA-C2-M1 | GA-C1-M2 | GA-C2-M2
Max | 23.767 | 15.585 | 20.968 | 13.325
Min | 22.686 | 14.818 | 20.182 | 12.843
Average | 23.014 | 14.918 | 20.424 | 12.898
STD | 0.22193 | 0.13802 | 0.30039 | 0.063915

f7 | GA-C1-M1 | GA-C2-M1 | GA-C1-M2 | GA-C2-M2
Max | 26.147 | 17.097 | 23.718 | 15.003
Min | 25.505 | 16.514 | 23.571 | 14.883
Average | 25.675 | 16.608 | 23.661 | 14.943
STD | 0.1198 | 0.087461 | 0.031198 | 0.023558

f8 | GA-C1-M1 | GA-C2-M1 | GA-C1-M2 | GA-C2-M2
Max | 28.488 | 18.888 | 25.426 | 16.091
Min | 27.7 | 17.862 | 25.294 | 15.904
Average | 27.885 | 18.179 | 25.365 | 15.948
STD | 0.15947 | 0.21193 | 0.032987 | 0.029009
Table 8. Statistical results for the value of the objective function using DE.
f1 | DE-C1-M1 | DE-C2-M1 | DE-C1-M2 | DE-C2-M2
Max | 3.0967×10^−36 | 2189.2 | 6.7457×10^−36 | 3135.8
Min | 1.1902×10^−38 | 33.487 | 3.852×10^−39 | 5.1453
Average | 4.4506×10^−37 | 637.3 | 5.6028×10^−37 | 738.21
STD | 6.8787×10^−37 | 508.86 | 1.1785×10^−36 | 680.65

f2 | DE-C1-M1 | DE-C2-M1 | DE-C1-M2 | DE-C2-M2
Max | 1.4997×10^−32 | 5.303 | 1.4998×10^−32 | 4.4089
Min | 1.4997×10^−32 | 0.33872 | 1.4998×10^−32 | 0.31488
Average | 1.4997×10^−32 | 1.8351 | 1.4998×10^−32 | 1.6916
STD | 2.7647×10^−48 | 1.2641 | 8.2941×10^−48 | 0.99901

f3 | DE-C1-M1 | DE-C2-M1 | DE-C1-M2 | DE-C2-M2
Max | −377.52 | −282.82 | −377.52 | −276.36
Min | −391.66 | −362.53 | −391.66 | −379.17
Average | −390.53 | −322.34 | −391.1 | −324.16
STD | 3.8741 | 17.26 | 2.7983 | 24.568

f4 | DE-C1-M1 | DE-C2-M1 | DE-C1-M2 | DE-C2-M2
Max | 3.9866 | 5.9015×10^5 | 3.9866 | 2.8148×10^5
Min | 0 | 221.75 | 1.0452×10^−29 | 85.7
Average | 0.31893 | 79546 | 0.079732 | 51649
STD | 1.0925 | 1.322×10^5 | 0.56379 | 71056

f5 | DE-C1-M1 | DE-C2-M1 | DE-C1-M2 | DE-C2-M2
Max | 0.47846 | 0.90603 | 0.54541 | 0.8534
Min | 0.0098573 | 0.11603 | 0 | 0.14461
Average | 0.17114 | 0.37272 | 0.15415 | 0.36911
STD | 0.12517 | 0.16561 | 0.12216 | 0.17738

f6 | DE-C1-M1 | DE-C2-M1 | DE-C1-M2 | DE-C2-M2
Max | 18.436 | 61.339 | 22.189 | 89.495
Min | 1.9899 | 7.284 | 2.9849 | 7.5864
Average | 7.6318 | 26.926 | 8.9962 | 30.295
STD | 3.5215 | 12.042 | 4.4606 | 15.282

f7 | DE-C1-M1 | DE-C2-M1 | DE-C1-M2 | DE-C2-M2
Max | 0.00025313 | 26.502 | 0.00023235 | 31.125
Min | 4.4825×10^−6 | 6.2426 | 2.782×10^−6 | 7.3592
Average | 3.1826×10^−5 | 16.616 | 4.163×10^−5 | 16.469
STD | 3.8948×10^−5 | 4.8337 | 4.6877×10^−5 | 4.8209

f8 | DE-C1-M1 | DE-C2-M1 | DE-C1-M2 | DE-C2-M2
Max | 3.1086×10^−15 | 15.395 | 3.1086×10^−15 | 13.672
Min | 3.1086×10^−15 | 1.8371 | 3.1086×10^−15 | 2.0741
Average | 3.1086×10^−15 | 8.5284 | 3.1086×10^−15 | 7.9863
STD | 0 | 2.8974 | 0 | 3.0219
Table 9. Statistical results for the execution time using DE.
f1 | DE-C1-M1 | DE-C2-M1 | DE-C1-M2 | DE-C2-M2
Max | 16.266 | 7.5975 | 13.654 | 5.8533
Min | 15.431 | 6.7975 | 11.854 | 5.2711
Average | 15.772 | 7.038 | 12.245 | 5.343
STD | 0.18457 | 0.14197 | 0.38329 | 0.078526
f2 | DE-C1-M1 | DE-C2-M1 | DE-C1-M2 | DE-C2-M2
Max | 57.416 | 23.251 | 34.549 | 15.018
Min | 48.193 | 20.763 | 33.857 | 14.653
Average | 49.606 | 21.304 | 34.325 | 14.75
STD | 1.8648 | 0.58789 | 0.20726 | 0.049601
f3 | DE-C1-M1 | DE-C2-M1 | DE-C1-M2 | DE-C2-M2
Max | 27.869 | 12.068 | 18.801 | 8.2856
Min | 26.586 | 11.451 | 18.492 | 8.1547
Average | 27.24 | 11.765 | 18.633 | 8.2207
STD | 0.23964 | 0.13481 | 0.068874 | 0.031299
f4 | DE-C1-M1 | DE-C2-M1 | DE-C1-M2 | DE-C2-M2
Max | 20.083 | 9.6074 | 15.165 | 6.5915
Min | 18.842 | 8.8058 | 14.626 | 6.4609
Average | 19.604 | 8.9937 | 14.947 | 6.5287
STD | 0.22143 | 0.13871 | 0.17267 | 0.032574
f5 | DE-C1-M1 | DE-C2-M1 | DE-C1-M2 | DE-C2-M2
Max | 28.155 | 12.564 | 21.205 | 7.9327
Min | 21.609 | 9.5708 | 16.104 | 7.3504
Average | 23.728 | 10.065 | 16.841 | 7.4283
STD | 1.3008 | 0.57603 | 0.82097 | 0.080032
f6 | DE-C1-M1 | DE-C2-M1 | DE-C1-M2 | DE-C2-M2
Max | 20.134 | 10.592 | 15.865 | 7.8398
Min | 16.432 | 7.5325 | 12.441 | 5.7978
Average | 17.298 | 8.1412 | 12.988 | 6.0956
STD | 0.59429 | 0.8543 | 0.66904 | 0.54322
f7 | DE-C1-M1 | DE-C2-M1 | DE-C1-M2 | DE-C2-M2
Max | 20.159 | 11.566 | 15.484 | 8.8737
Min | 19.054 | 8.5482 | 15.042 | 6.7808
Average | 19.593 | 9.0394 | 15.397 | 7.171
STD | 0.33105 | 0.70994 | 0.088293 | 0.57938
f8 | DE-C1-M1 | DE-C2-M1 | DE-C1-M2 | DE-C2-M2
Max | 23.642 | 12.882 | 17.531 | 7.8927
Min | 22.782 | 9.805 | 17.011 | 7.4364
Average | 23.23 | 10.159 | 17.413 | 7.5943
STD | 0.19133 | 0.44989 | 0.13786 | 0.069855
Table 10. Statistical results for the value of the objective function using PSO.
f1 | PSO-C1-M1 | PSO-C2-M1 | PSO-C1-M2 | PSO-C2-M2
Max | 3.4482 × 10^−128 | 3.4909 × 10^−92 | 2.0141 × 10^−127 | 8.0804 × 10^−92
Min | 4.0135 × 10^−136 | 1.1576 × 10^−99 | 5.7269 × 10^−136 | 4.0316 × 10^−100
Average | 2.3691 × 10^−129 | 1.0474 × 10^−93 | 7.6416 × 10^−129 | 2.6889 × 10^−93
STD | 6.8822 × 10^−129 | 5.0295 × 10^−93 | 3.1639 × 10^−128 | 1.1853 × 10^−92
f2 | PSO-C1-M1 | PSO-C2-M1 | PSO-C1-M2 | PSO-C2-M2
Max | 0.45432 | 1.1605 | 3.3738 | 0.45432
Min | 1.4997 × 10^−32 | 1.4997 × 10^−32 | 1.4998 × 10^−32 | 1.4998 × 10^−32
Average | 0.036212 | 0.044963 | 0.18755 | 0.023545
STD | 0.11155 | 0.18929 | 0.60021 | 0.091378
f3 | PSO-C1-M1 | PSO-C2-M1 | PSO-C1-M2 | PSO-C2-M2
Max | −335.11 | −335.11 | −320.98 | −335.11
Min | −391.66 | −391.66 | −391.66 | −391.66
Average | −363.95 | −371.02 | −365.37 | −371.02
STD | 17.127 | 15.698 | 17.608 | 14.898
f4 | PSO-C1-M1 | PSO-C2-M1 | PSO-C1-M2 | PSO-C2-M2
Max | 7.7008 | 150.43 | 125.99 | 7.773
Min | 0.001011 | 0.00075109 | 9.3565 × 10^−5 | 8.2241 × 10^−5
Average | 2.3245 | 5.2944 | 4.0551 | 5.0642
STD | 2.2494 | 21.749 | 17.666 | 17.601
f5 | PSO-C1-M1 | PSO-C2-M1 | PSO-C1-M2 | PSO-C2-M2
Max | 0.2658 | 0.17454 | 0.22621 | 0.18699
Min | 0.017236 | 0.007396 | 0 | 0.01969
Average | 0.082166 | 0.067271 | 0.073552 | 0.081583
STD | 0.045771 | 0.032932 | 0.046 | 0.036983
f6 | PSO-C1-M1 | PSO-C2-M1 | PSO-C1-M2 | PSO-C2-M2
Max | 13.929 | 12.934 | 15.919 | 15.919
Min | 0 | 1.9899 | 0 | 0.99496
Average | 7.1836 | 6.507 | 7.9995 | 6.6861
STD | 3.1674 | 2.942 | 3.5275 | 3.5107
f7 | PSO-C1-M1 | PSO-C2-M1 | PSO-C1-M2 | PSO-C2-M2
Max | 4.6907 | 5.0196 | 5.8513 | 5.4093
Min | 0.010729 | 5.7752 × 10^−23 | 0.010729 | 1.0224 × 10^−23
Average | 0.89475 | 0.72333 | 1.2073 | 0.74679
STD | 1.2023 | 1.2346 | 1.6043 | 1.2825
f8 | PSO-C1-M1 | PSO-C2-M1 | PSO-C1-M2 | PSO-C2-M2
Max | 1.1551 | 1.6462 | 2.0133 | 1.1551
Min | 3.1086 × 10^−15 | 3.1086 × 10^−15 | 3.1086 × 10^−15 | 3.1086 × 10^−15
Average | 0.11551 | 0.056027 | 0.26784 | 0.069309
STD | 0.35006 | 0.28167 | 0.55746 | 0.27712
Table 11. Statistical results for the execution time using PSO.
f1 | PSO-C1-M1 | PSO-C2-M1 | PSO-C1-M2 | PSO-C2-M2
Max | 4.6436 | 4.7827 | 4.3586 | 3.9628
Min | 4.466 | 4.4134 | 3.5466 | 3.4743
Average | 4.5714 | 4.4524 | 3.6805 | 3.5202
STD | 0.055407 | 0.052652 | 0.19825 | 0.10977
f2 | PSO-C1-M1 | PSO-C2-M1 | PSO-C1-M2 | PSO-C2-M2
Max | 23.016 | 23.072 | 18.011 | 17.645
Min | 22.316 | 22.126 | 16.911 | 16.916
Average | 22.479 | 22.46 | 17.142 | 17.17
STD | 0.14235 | 0.37272 | 0.22887 | 0.21337
f3 | PSO-C1-M1 | PSO-C2-M1 | PSO-C1-M2 | PSO-C2-M2
Max | 10.115 | 10.237 | 7.9032 | 7.8065
Min | 9.9116 | 9.9536 | 7.3107 | 7.3321
Average | 9.9801 | 10.03 | 7.3871 | 7.4062
STD | 0.038899 | 0.045719 | 0.1402 | 0.12795
f4 | PSO-C1-M1 | PSO-C2-M1 | PSO-C1-M2 | PSO-C2-M2
Max | 6.7835 | 6.3802 | 7.1389 | 5.6697
Min | 5.9945 | 5.9169 | 5.083 | 4.9944
Average | 6.3626 | 6.1397 | 5.5549 | 5.2585
STD | 0.17224 | 0.073056 | 0.36878 | 0.175
f5 | PSO-C1-M1 | PSO-C2-M1 | PSO-C1-M2 | PSO-C2-M2
Max | 21.211 | 21.321 | 6.8068 | 7.8462
Min | 20.771 | 20.815 | 6.2976 | 6.3105
Average | 20.893 | 21.037 | 6.3604 | 6.502
STD | 0.10099 | 0.063159 | 0.087315 | 0.42114
f6 | PSO-C1-M1 | PSO-C2-M1 | PSO-C1-M2 | PSO-C2-M2
Max | 5.253 | 4.9498 | 4.5053 | 4.5329
Min | 4.8052 | 4.7852 | 4.0516 | 4.0585
Average | 4.8528 | 4.8239 | 4.1035 | 4.1364
STD | 0.066958 | 0.036578 | 0.096493 | 0.1393
f7 | PSO-C1-M1 | PSO-C2-M1 | PSO-C1-M2 | PSO-C2-M2
Max | 7.0562 | 7.6148 | 6.3973 | 6.1467
Min | 6.3352 | 6.3872 | 5.4016 | 5.4081
Average | 6.5793 | 6.5748 | 5.7444 | 5.6745
STD | 0.12996 | 0.18931 | 0.27627 | 0.18781
f8 | PSO-C1-M1 | PSO-C2-M1 | PSO-C1-M2 | PSO-C2-M2
Max | 8.1745 | 8.0003 | 7.0702 | 6.945
Min | 7.3989 | 7.4915 | 6.2898 | 6.3109
Average | 7.5151 | 7.5544 | 6.3969 | 6.3699
STD | 0.13768 | 0.068633 | 0.16506 | 0.1237
Table 12. General statistical results for the value of the objective function.
f1 | GA-C1-M1 | GA-C2-M1 | GA-C1-M2 | GA-C2-M2 | DE-C1-M1 | DE-C2-M1 | DE-C1-M2 | DE-C2-M2 | PSO-C1-M1 | PSO-C2-M1 | PSO-C1-M2 | PSO-C2-M2
Max | 803.1 | 851.1 | 693.48 | 878.87 | 3.10 × 10^−36 | 2189.2 | 6.75 × 10^−36 | 3135.8 | 3.45 × 10^−128 | 3.49 × 10^−92 | 2.01 × 10^−127 | 8.08 × 10^−92
Min | 110.12 | 231.66 | 96.881 | 198.67 | 1.19 × 10^−38 | 33.487 | 3.85 × 10^−39 | 5.1453 | 4.01 × 10^−136 | 1.16 × 10^−99 | 5.73 × 10^−136 | 4.03 × 10^−100
Average | 396.34 | 512.71 | 363.63 | 524.55 | 4.45 × 10^−37 | 637.3 | 5.60 × 10^−37 | 738.21 | 2.37 × 10^−129 | 1.05 × 10^−93 | 7.64 × 10^−129 | 2.69 × 10^−93
STD | 164.16 | 145.59 | 165.86 | 165.07 | 6.88 × 10^−37 | 508.86 | 1.18 × 10^−36 | 680.65 | 6.88 × 10^−129 | 5.03 × 10^−93 | 3.16 × 10^−128 | 1.19 × 10^−92
f2 | GA-C1-M1 | GA-C2-M1 | GA-C1-M2 | GA-C2-M2 | DE-C1-M1 | DE-C2-M1 | DE-C1-M2 | DE-C2-M2 | PSO-C1-M1 | PSO-C2-M1 | PSO-C1-M2 | PSO-C2-M2
Max | 4.0682 | 2.8756 | 4.1886 | 2.9012 | 1.50 × 10^−32 | 5.303 | 1.50 × 10^−32 | 4.4089 | 0.45432 | 1.1605 | 3.3738 | 0.45432
Min | 0.43532 | 0.73858 | 0.50167 | 0.49925 | 1.50 × 10^−32 | 0.33872 | 1.50 × 10^−32 | 0.31488 | 1.50 × 10^−32 | 1.50 × 10^−32 | 1.50 × 10^−32 | 1.50 × 10^−32
Average | 2.0062 | 1.6742 | 1.7204 | 1.6712 | 1.50 × 10^−32 | 1.8351 | 1.50 × 10^−32 | 1.6916 | 0.036212 | 0.044963 | 0.18755 | 0.023545
STD | 1.094 | 0.43556 | 0.91603 | 0.49293 | 2.76 × 10^−48 | 1.2641 | 8.29 × 10^−48 | 0.99901 | 0.11155 | 0.18929 | 0.60021 | 0.091378
f3 | GA-C1-M1 | GA-C2-M1 | GA-C1-M2 | GA-C2-M2 | DE-C1-M1 | DE-C2-M1 | DE-C1-M2 | DE-C2-M2 | PSO-C1-M1 | PSO-C2-M1 | PSO-C1-M2 | PSO-C2-M2
Max | −361.86 | −369.18 | −360.01 | −366.04 | −377.52 | −282.82 | −377.52 | −276.36 | −335.11 | −335.11 | −320.98 | −335.11
Min | −385.82 | −390.21 | −389.06 | −385.25 | −391.66 | −362.53 | −391.66 | −379.17 | −391.66 | −391.66 | −391.66 | −391.66
Average | −376.93 | −378.57 | −377.46 | −378.18 | −390.53 | −322.34 | −391.1 | −324.16 | −363.95 | −371.02 | −365.37 | −371.02
STD | 5.3254 | 4.3697 | 5.5337 | 4.2567 | 3.8741 | 17.26 | 2.7983 | 24.568 | 17.127 | 15.698 | 17.608 | 14.898
f4 | GA-C1-M1 | GA-C2-M1 | GA-C1-M2 | GA-C2-M2 | DE-C1-M1 | DE-C2-M1 | DE-C1-M2 | DE-C2-M2 | PSO-C1-M1 | PSO-C2-M1 | PSO-C1-M2 | PSO-C2-M2
Max | 7.48 × 10^5 | 2.67 × 10^5 | 6.51 × 10^5 | 2.79 × 10^5 | 3.9866 | 5.90 × 10^5 | 3.9866 | 2.81 × 10^5 | 7.7008 | 150.43 | 125.99 | 7.773
Min | 11809 | 22674 | 10592 | 14904 | 0 | 221.75 | 1.05 × 10^−29 | 85.7 | 0.001011 | 0.00075109 | 9.36 × 10^−5 | 8.22 × 10^−5
Average | 2.18 × 10^5 | 1.28 × 10^5 | 2.03 × 10^5 | 1.19 × 10^5 | 0.31893 | 79546 | 0.079732 | 51649 | 2.3245 | 5.2944 | 4.0551 | 5.0642
STD | 1.81 × 10^5 | 61355 | 1.61 × 10^5 | 59792 | 1.0925 | 1.32 × 10^5 | 0.56379 | 71056 | 2.2494 | 21.749 | 17.666 | 17.601
f5 | GA-C1-M1 | GA-C2-M1 | GA-C1-M2 | GA-C2-M2 | DE-C1-M1 | DE-C2-M1 | DE-C1-M2 | DE-C2-M2 | PSO-C1-M1 | PSO-C2-M1 | PSO-C1-M2 | PSO-C2-M2
Max | 1.0079 | 0.90574 | 1.0422 | 0.95218 | 0.47846 | 0.90603 | 0.54541 | 0.8534 | 0.2658 | 0.17454 | 0.22621 | 0.18699
Min | 0.13795 | 0.51898 | 0.13518 | 0.4226 | 0.0098573 | 0.11603 | 0 | 0.14461 | 0.017236 | 0.007396 | 0 | 0.01969
Average | 0.62559 | 0.72619 | 0.64354 | 0.72477 | 0.17114 | 0.37272 | 0.15415 | 0.36911 | 0.082166 | 0.067271 | 0.073552 | 0.081583
STD | 0.29169 | 0.10053 | 0.27751 | 0.11398 | 0.12517 | 0.16561 | 0.12216 | 0.17738 | 0.045771 | 0.032932 | 0.046 | 0.036983
f6 | GA-C1-M1 | GA-C2-M1 | GA-C1-M2 | GA-C2-M2 | DE-C1-M1 | DE-C2-M1 | DE-C1-M2 | DE-C2-M2 | PSO-C1-M1 | PSO-C2-M1 | PSO-C1-M2 | PSO-C2-M2
Max | 26.967 | 22.369 | 26.161 | 23.734 | 18.436 | 61.339 | 22.189 | 89.495 | 13.929 | 12.934 | 15.919 | 15.919
Min | 9.0212 | 11.351 | 6.8412 | 7.9241 | 1.9899 | 7.284 | 2.9849 | 7.5864 | 0 | 1.9899 | 0 | 0.99496
Average | 18.404 | 17.323 | 16.87 | 17.45 | 7.6318 | 26.926 | 8.9962 | 30.295 | 7.1836 | 6.507 | 7.9995 | 6.6861
STD | 4.2545 | 2.6509 | 4.1317 | 3.2635 | 3.5215 | 12.042 | 4.4606 | 15.282 | 3.1674 | 2.942 | 3.5275 | 3.5107
f7 | GA-C1-M1 | GA-C2-M1 | GA-C1-M2 | GA-C2-M2 | DE-C1-M1 | DE-C2-M1 | DE-C1-M2 | DE-C2-M2 | PSO-C1-M1 | PSO-C2-M1 | PSO-C1-M2 | PSO-C2-M2
Max | 15.554 | 15.825 | 15.913 | 16.091 | 0.00025313 | 26.502 | 0.00023235 | 31.125 | 4.6907 | 5.0196 | 5.8513 | 5.4093
Min | 5.81 | 9.5476 | 6.1036 | 9.749 | 4.48 × 10^−6 | 6.2426 | 2.78 × 10^−6 | 7.3592 | 0.010729 | 5.78 × 10^−23 | 0.010729 | 1.02 × 10^−23
Average | 9.7471 | 13.107 | 9.4184 | 13.142 | 3.18 × 10^−5 | 16.616 | 4.16 × 10^−5 | 16.469 | 0.89475 | 0.72333 | 1.2073 | 0.74679
STD | 2.1818 | 1.3576 | 2.1384 | 1.3547 | 3.89 × 10^−5 | 4.8337 | 4.69 × 10^−5 | 4.8209 | 1.2023 | 1.2346 | 1.6043 | 1.2825
f8 | GA-C1-M1 | GA-C2-M1 | GA-C1-M2 | GA-C2-M2 | DE-C1-M1 | DE-C2-M1 | DE-C1-M2 | DE-C2-M2 | PSO-C1-M1 | PSO-C2-M1 | PSO-C1-M2 | PSO-C2-M2
Max | 3.4624 | 5.2211 | 3.8739 | 5.3901 | 3.11 × 10^−15 | 15.395 | 3.11 × 10^−15 | 13.672 | 1.1551 | 1.6462 | 2.0133 | 1.1551
Min | 0.88282 | 3.3081 | 1.2655 | 3.0742 | 3.11 × 10^−15 | 1.8371 | 3.11 × 10^−15 | 2.0741 | 3.11 × 10^−15 | 3.11 × 10^−15 | 3.11 × 10^−15 | 3.11 × 10^−15
Average | 2.4774 | 4.2857 | 2.7235 | 4.253 | 3.11 × 10^−15 | 8.5284 | 3.11 × 10^−15 | 7.9863 | 0.11551 | 0.056027 | 0.26784 | 0.069309
STD | 0.5349 | 0.44425 | 0.55014 | 0.53566 | 0 | 2.8974 | 0 | 3.0219 | 0.35006 | 0.28167 | 0.55746 | 0.27712
Table 13. General statistical results for the run time.
f1 | GA-C1-M1 | GA-C2-M1 | GA-C1-M2 | GA-C2-M2 | DE-C1-M1 | DE-C2-M1 | DE-C1-M2 | DE-C2-M2 | PSO-C1-M1 | PSO-C2-M1 | PSO-C1-M2 | PSO-C2-M2
Max | 24.144 | 14.818 | 23.045 | 13.121 | 16.266 | 7.5975 | 13.654 | 5.8533 | 4.6436 | 4.7827 | 4.3586 | 3.9628
Min | 21.58 | 14.109 | 19.585 | 12.583 | 15.431 | 6.7975 | 11.854 | 5.2711 | 4.466 | 4.4134 | 3.5466 | 3.4743
Average | 22.567 | 14.376 | 20.548 | 12.98 | 15.772 | 7.038 | 12.245 | 5.343 | 4.5714 | 4.4524 | 3.6805 | 3.5202
STD | 0.83158 | 0.21588 | 0.78465 | 0.14111 | 0.18457 | 0.14197 | 0.38329 | 0.078526 | 0.055407 | 0.052652 | 0.19825 | 0.10977
f2 | GA-C1-M1 | GA-C2-M1 | GA-C1-M2 | GA-C2-M2 | DE-C1-M1 | DE-C2-M1 | DE-C1-M2 | DE-C2-M2 | PSO-C1-M1 | PSO-C2-M1 | PSO-C1-M2 | PSO-C2-M2
Max | 53.419 | 33.995 | 43.846 | 28.084 | 57.416 | 23.251 | 34.549 | 15.018 | 23.016 | 23.072 | 18.011 | 17.645
Min | 52.5 | 32.746 | 42.469 | 27.325 | 48.193 | 20.763 | 33.857 | 14.653 | 22.316 | 22.126 | 16.911 | 16.916
Average | 52.705 | 33.576 | 43.305 | 27.68 | 49.606 | 21.304 | 34.325 | 14.75 | 22.479 | 22.46 | 17.142 | 17.17
STD | 0.19373 | 0.45674 | 0.40224 | 0.29811 | 1.8648 | 0.58789 | 0.20726 | 0.049601 | 0.14235 | 0.37272 | 0.22887 | 0.21337
f3 | GA-C1-M1 | GA-C2-M1 | GA-C1-M2 | GA-C2-M2 | DE-C1-M1 | DE-C2-M1 | DE-C1-M2 | DE-C2-M2 | PSO-C1-M1 | PSO-C2-M1 | PSO-C1-M2 | PSO-C2-M2
Max | 33.675 | 21.241 | 27.961 | 16.951 | 27.869 | 12.068 | 18.801 | 8.2856 | 10.115 | 10.237 | 7.9032 | 7.8065
Min | 32.444 | 20.753 | 27.693 | 16.249 | 26.586 | 11.451 | 18.492 | 8.1547 | 9.9116 | 9.9536 | 7.3107 | 7.3321
Average | 32.656 | 20.933 | 27.829 | 16.625 | 27.24 | 11.765 | 18.633 | 8.2207 | 9.9801 | 10.03 | 7.3871 | 7.4062
STD | 0.2189 | 0.12987 | 0.051181 | 0.19769 | 0.23964 | 0.13481 | 0.068874 | 0.031299 | 0.038899 | 0.045719 | 0.1402 | 0.12795
f4 | GA-C1-M1 | GA-C2-M1 | GA-C1-M2 | GA-C2-M2 | DE-C1-M1 | DE-C2-M1 | DE-C1-M2 | DE-C2-M2 | PSO-C1-M1 | PSO-C2-M1 | PSO-C1-M2 | PSO-C2-M2
Max | 26.432 | 17.195 | 23.177 | 14.662 | 20.083 | 9.6074 | 15.165 | 6.5915 | 6.7835 | 6.3802 | 7.1389 | 5.6697
Min | 25.832 | 16.809 | 22.28 | 14.579 | 18.842 | 8.8058 | 14.626 | 6.4609 | 5.9945 | 5.9169 | 5.083 | 4.9944
Average | 26.12 | 16.988 | 22.806 | 14.625 | 19.604 | 8.9937 | 14.947 | 6.5287 | 6.3626 | 6.1397 | 5.5549 | 5.2585
STD | 0.12577 | 0.10671 | 0.3793 | 0.021441 | 0.22143 | 0.13871 | 0.17267 | 0.032574 | 0.17224 | 0.073056 | 0.36878 | 0.175
f5 | GA-C1-M1 | GA-C2-M1 | GA-C1-M2 | GA-C2-M2 | DE-C1-M1 | DE-C2-M1 | DE-C1-M2 | DE-C2-M2 | PSO-C1-M1 | PSO-C2-M1 | PSO-C1-M2 | PSO-C2-M2
Max | 56.463 | 37.643 | 25.431 | 15.981 | 28.155 | 12.564 | 21.205 | 7.9327 | 21.211 | 21.321 | 6.8068 | 7.8462
Min | 51.667 | 32.043 | 24.527 | 15.403 | 21.609 | 9.5708 | 16.104 | 7.3504 | 20.771 | 20.815 | 6.2976 | 6.3105
Average | 52.609 | 32.336 | 25.207 | 15.777 | 23.728 | 10.065 | 16.841 | 7.4283 | 20.893 | 21.037 | 6.3604 | 6.502
STD | 0.98871 | 0.79033 | 0.28878 | 0.2308 | 1.3008 | 0.57603 | 0.82097 | 0.080032 | 0.10099 | 0.063159 | 0.087315 | 0.42114
f6 | GA-C1-M1 | GA-C2-M1 | GA-C1-M2 | GA-C2-M2 | DE-C1-M1 | DE-C2-M1 | DE-C1-M2 | DE-C2-M2 | PSO-C1-M1 | PSO-C2-M1 | PSO-C1-M2 | PSO-C2-M2
Max | 23.767 | 15.585 | 20.968 | 13.325 | 20.134 | 10.592 | 15.865 | 7.8398 | 5.253 | 4.9498 | 4.5053 | 4.5329
Min | 22.686 | 14.818 | 20.182 | 12.843 | 16.432 | 7.5325 | 12.441 | 5.7978 | 4.8052 | 4.7852 | 4.0516 | 4.0585
Average | 23.014 | 14.918 | 20.424 | 12.898 | 17.298 | 8.1412 | 12.988 | 6.0956 | 4.8528 | 4.8239 | 4.1035 | 4.1364
STD | 0.22193 | 0.13802 | 0.30039 | 0.063915 | 0.59429 | 0.8543 | 0.66904 | 0.54322 | 0.066958 | 0.036578 | 0.096493 | 0.1393
f7 | GA-C1-M1 | GA-C2-M1 | GA-C1-M2 | GA-C2-M2 | DE-C1-M1 | DE-C2-M1 | DE-C1-M2 | DE-C2-M2 | PSO-C1-M1 | PSO-C2-M1 | PSO-C1-M2 | PSO-C2-M2
Max | 26.147 | 17.097 | 23.718 | 15.003 | 20.159 | 11.566 | 15.484 | 8.8737 | 7.0562 | 7.6148 | 6.3973 | 6.1467
Min | 25.505 | 16.514 | 23.571 | 14.883 | 19.054 | 8.5482 | 15.042 | 6.7808 | 6.3352 | 6.3872 | 5.4016 | 5.4081
Average | 25.675 | 16.608 | 23.661 | 14.943 | 19.593 | 9.0394 | 15.397 | 7.171 | 6.5793 | 6.5748 | 5.7444 | 5.6745
STD | 0.1198 | 0.087461 | 0.031198 | 0.023558 | 0.33105 | 0.70994 | 0.088293 | 0.57938 | 0.12996 | 0.18931 | 0.27627 | 0.18781
f8 | GA-C1-M1 | GA-C2-M1 | GA-C1-M2 | GA-C2-M2 | DE-C1-M1 | DE-C2-M1 | DE-C1-M2 | DE-C2-M2 | PSO-C1-M1 | PSO-C2-M1 | PSO-C1-M2 | PSO-C2-M2
Max | 28.488 | 18.888 | 25.426 | 16.091 | 23.642 | 12.882 | 17.531 | 7.8927 | 8.1745 | 8.0003 | 7.0702 | 6.945
Min | 27.7 | 17.862 | 25.294 | 15.904 | 22.782 | 9.805 | 17.011 | 7.4364 | 7.3989 | 7.4915 | 6.2898 | 6.3109
Average | 27.885 | 18.179 | 25.365 | 15.948 | 23.23 | 10.159 | 17.413 | 7.5943 | 7.5151 | 7.5544 | 6.3969 | 6.3699
STD | 0.15947 | 0.21193 | 0.032987 | 0.029009 | 0.19133 | 0.44989 | 0.13786 | 0.069855 | 0.13768 | 0.068633 | 0.16506 | 0.1237
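Each table entry condenses a set of repeated executions of one algorithm/configuration pair into Max, Min, Average, and STD (the totals in Tables 14–16 are consistent with 50 executions per case). As a minimal Python sketch of this summary, where the run values below are illustrative rather than taken from the experiments, and the use of the sample form of the standard deviation is an assumption:

```python
import statistics

def summarize(runs):
    """Summarize a list of run results as Max/Min/Average/STD, as in the tables."""
    return {
        "Max": max(runs),
        "Min": min(runs),
        "Average": statistics.mean(runs),
        "STD": statistics.stdev(runs),  # sample standard deviation (assumed)
    }

# Illustrative execution times (seconds) for one algorithm/configuration.
times = [15.4, 15.7, 15.9, 16.1, 15.8]
print(summarize(times))
```

The same summary applies unchanged whether the runs record objective-function values (Tables 8, 10, 12) or execution times (Tables 9, 11, 13).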
Table 14. Total values for the execution time using a genetic algorithm.
f | GA-C1-M1 | GA-C2-M1 | GA-C1-M2 | GA-C2-M2
f1 | 1128.4 | 718.81 | 1027.4 | 649.02
f2 | 2635.3 | 1678.8 | 2165.2 | 1384
f3 | 1632.8 | 1046.7 | 1391.4 | 831.26
f4 | 1306 | 849.38 | 1140.3 | 731.26
f5 | 2630.5 | 1616.8 | 1260.3 | 788.87
f6 | 1150.7 | 745.89 | 1021.2 | 644.9
f7 | 1283.8 | 830.4 | 1183 | 747.14
f8 | 1394.2 | 908.95 | 1268.2 | 797.4
Total | 13,162 | 8,396 | 10,457 | 6,574
Table 15. Total values for the execution time using differential evolution.
f | DE-C1-M1 | DE-C2-M1 | DE-C1-M2 | DE-C2-M2
f1 | 788.58 | 351.9 | 612.27 | 267.15
f2 | 2480.3 | 1065.2 | 1716.3 | 737.48
f3 | 1362 | 588.25 | 931.63 | 411.04
f4 | 980.19 | 449.69 | 747.37 | 326.44
f5 | 1186.4 | 503.27 | 842.07 | 371.42
f6 | 864.88 | 407.06 | 649.41 | 304.78
f7 | 979.64 | 451.97 | 769.87 | 358.55
f8 | 1161.5 | 507.97 | 870.66 | 379.72
Total | 9803.5 | 4325.3 | 7139.6 | 3156.6
Table 16. Total values for the execution time using particle swarm optimization.
f | PSO-C1-M1 | PSO-C2-M1 | PSO-C1-M2 | PSO-C2-M2
f1 | 228.57 | 222.62 | 184.03 | 176.01
f2 | 1124 | 1123 | 857.1 | 858.52
f3 | 499.01 | 501.5 | 369.35 | 370.31
f4 | 318.13 | 306.98 | 277.74 | 262.93
f5 | 1044.6 | 1051.8 | 318.02 | 325.1
f6 | 242.64 | 241.19 | 205.18 | 206.82
f7 | 328.97 | 328.74 | 287.22 | 283.73
f8 | 375.75 | 377.72 | 319.84 | 318.5
Total | 4161.7 | 4153.6 | 2818.5 | 2801.9
Table 17. Total values for the execution time of the algorithms (in seconds).
Algorithm | M1 | M2 | Difference | Percentage TG
GA | 21,557 | 17,031 | 4,526 | 10.28%
DE | 14,129 | 10,296 | 3,833 | 8.72%
PSO | 8,315 | 5,620 | 2,695 | 6.12%
Total | 44,001 | 32,947 | 11,054 | 25.12%
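In Table 17, Difference is M1 − M2 per algorithm, and Percentage TG appears to express each difference relative to the grand M1 total of 44,001 s (an interpretation inferred from the numbers; the printed percentages differ from the recomputed ones by at most about 0.01 due to rounding). A minimal Python sketch under that interpretation:

```python
# Total execution times (seconds) per algorithm from Table 17: (M1, M2).
totals = {"GA": (21557, 17031), "DE": (14129, 10296), "PSO": (8315, 5620)}

grand_m1 = sum(m1 for m1, _ in totals.values())  # 44,001 s
grand_m2 = sum(m2 for _, m2 in totals.values())  # 32,947 s

for name, (m1, m2) in totals.items():
    diff = m1 - m2                # "Difference" column
    pct = 100 * diff / grand_m1   # "Percentage TG" (assumed definition)
    print(f"{name}: {diff} s saved, {pct:.2f}% of total M1 time")

saved = grand_m1 - grand_m2
print(f"Total: {saved} s saved, {100 * saved / grand_m1:.2f}%")
```

Under this reading, the 25.12% in the Total row is simply the overall M1-to-M2 reduction (11,054 s) as a share of the total M1 time.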

Gélvez, N.; Espitia, H.; Bayona, J. Testing of a Virtualized Distributed Processing System for the Execution of Bio-Inspired Optimization Algorithms. Symmetry 2020, 12, 1192. https://doi.org/10.3390/sym12071192
