Topic Editors

Prof. Dr. Rui Araújo
Institute of Systems and Robotics (ISR-UC), and Department of Electrical and Computer Engineering (DEEC-UC), University of Coimbra, Pólo II, PT-3030-290 Coimbra, Portugal
Dr. António Pedro Aguiar
Department of Electrical and Computer Engineering, University of Porto, 4099-002 Porto, Portugal
Dr. Nuno Lau
Department of Electronics, Telecommunications and Informatics (DETI), Institute of Electronics and Informatics Engineering of Aveiro (IEETA), University of Aveiro, 3810-193 Aveiro, Portugal
Dr. Rodrigo Ventura
Institute for Systems and Robotics (ISR/IST), Department of Electrical and Computer Engineering, Instituto Superior Técnico, University of Lisbon, 1049-001 Lisbon, Portugal
Dr. João Fabro
Departamento Acadêmico de Informática (DAINF), Federal University of Technology-Paraná (UTFPR), Curitiba, Paraná 80000-000, Brazil

Soft Computing

Abstract submission deadline
closed (28 February 2023)
Manuscript submission deadline
closed (30 April 2023)
Viewed by
63658

Topic Information

Dear Colleagues,

Soft computing methodologies, techniques, and algorithms focus on approximate models and aim to provide solutions to complex problems while remaining tolerant of imprecision, uncertainty, partial truth, and approximation. Soft computing is the subject of both theoretical and practical research, and its techniques are currently applied in many areas, including industrial, commercial, and domestic systems.

This Topic is open to submissions of high-quality papers on advances in soft computing and its applications. Themes include, but are not limited to: computational intelligence, computational learning, machine learning, intelligent control, fuzzy systems, neural networks, genetic algorithms, ant colony optimization, particle swarm optimization, other evolutionary algorithms, other probabilistic computing, rough sets, hybrid methods, wavelets, expert systems, optimization, modeling, estimation, prediction, simulation, control, big data, robotics, mobile robotics and intelligent vehicles, robot manipulator control, sensing, soft sensors, automation, industrial systems, embedded systems, and real-time systems.

The Topic aims to provide for the rapid dissemination of important research in soft computing technologies. It encourages the integration and cross-fertilization of soft computing techniques and other scientific areas, from both theoretical and practical points of view. It aims to link ideas and techniques from soft computing with other disciplines and with advanced applications.

Application areas include, but are not limited to: robotics, intelligent agents, signal and image processing, computer vision, system monitoring, fault detection and diagnosis, control systems, systems identification and modeling, optimization, process optimization, multi-objective optimization, decision support, autonomous reasoning, manufacturing systems, power systems, energy systems, mechatronics, nano- and microsystems, motion control and power electronics, industrial electronics, time series prediction, human–machine interfaces, virtual reality, consumer electronics, bio-inspired algorithms, biomedical engineering, agricultural systems and production, data mining, and data visualization.

Prof. Dr. Rui Araújo
Dr. António Pedro Aguiar
Dr. Nuno Lau
Dr. Rodrigo Ventura
Dr. João Fabro
Topic Editors

Keywords

  • soft computing
  • computational intelligence
  • machine learning
  • deep learning
  • intelligent control
  • fuzzy systems
  • neural networks
  • genetic algorithms
  • bio-inspired algorithms
  • expert systems
  • optimization
  • modeling
  • big data
  • data mining
  • natural language processing (NLP)
  • data visualization
  • robotics
  • robot control
  • intelligent vehicles
  • intelligent agents
  • multi-agent system
  • image processing
  • signal processing
  • speech processing
  • video processing
  • audio processing
  • computer vision
  • fault detection and diagnosis
  • decision support
  • human–machine interfaces
  • embedded systems
  • real-time systems
  • soft sensors

Participating Journals

Journal Name | Impact Factor | CiteScore | Launched Year | First Decision (median) | APC
Applied Sciences (applsci) | 2.7 | 4.5 | 2011 | 16.9 Days | CHF 2400
Mathematics (mathematics) | 2.4 | 3.5 | 2013 | 16.9 Days | CHF 2600
Information (information) | 3.1 | 5.8 | 2010 | 18 Days | CHF 1600
Future Internet (futureinternet) | 3.4 | 6.7 | 2009 | 11.8 Days | CHF 1600
Algorithms (algorithms) | 2.3 | 3.7 | 2008 | 15 Days | CHF 1600

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics cooperates with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to take advantage of the benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea with a time-stamped preprint record;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (30 papers)

16 pages, 504 KiB  
Article
Regularized Mislevy-Wu Model for Handling Nonignorable Missing Item Responses
by Alexander Robitzsch
Information 2023, 14(7), 368; https://doi.org/10.3390/info14070368 - 28 Jun 2023
Viewed by 938
Abstract
Missing item responses are frequently found in educational large-scale assessment studies. In this article, the Mislevy-Wu item response model is applied for handling nonignorable missing item responses. This model allows the missingness of an item to depend on the item itself and on a further latent variable. However, with low to moderate amounts of missing item responses, model parameters for the missingness mechanism are difficult to estimate. Hence, regularized estimation using a fused ridge penalty is applied to the Mislevy-Wu model to stabilize estimation. The fused ridge penalty function is defined separately for multiple-choice and constructed-response items because previous research indicated that the missingness mechanisms differ strongly between the two item types. In a simulation study, regularized estimation improved the stability of item parameter estimation. The method is also illustrated using international data from the Progress in International Reading Literacy Study (PIRLS) 2011. Full article
(This article belongs to the Topic Soft Computing)
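The stabilizing idea can be sketched in a few lines: a fused ridge penalty shrinks a group of parameters toward one another by penalizing their squared pairwise differences. This is an illustrative sketch under simple assumptions, not the paper's exact estimator; `fused_ridge_penalty`, `theta`, and `lam` are hypothetical names.

```python
import numpy as np

def fused_ridge_penalty(theta, lam):
    """Fused ridge penalty: shrinks the parameters in `theta` toward
    one another by penalizing squared pairwise differences; `lam`
    controls the strength of the shrinkage."""
    diffs = theta[:, None] - theta[None, :]          # all pairwise differences
    return lam * np.sum(np.triu(diffs, k=1) ** 2)    # each pair counted once

# A penalized loss would then be: loss(theta) + fused_ridge_penalty(theta, lam)
```

In the paper's setting, separate penalties would be applied to the missingness parameters of multiple-choice and constructed-response items.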

30 pages, 11851 KiB  
Article
Load Forecasting Based on LVMD-DBFCM Load Curve Clustering and the CNN-IVIA-BLSTM Model
by Linjing Hu, Jiachen Wang, Zhaoze Guo and Tengda Zheng
Appl. Sci. 2023, 13(12), 7332; https://doi.org/10.3390/app13127332 - 20 Jun 2023
Cited by 2 | Viewed by 1021
Abstract
Power load forecasting plays an important role in power systems, and the accuracy of load forecasting is of vital importance to power system planning as well as economic efficiency. Power load data are nonsmooth, nonlinear, "noisy" time series. Traditional load forecasting has low accuracy, its curves fail to fit the load variation, and a single forecasting model does not predict it well. In this paper, we propose a novel model based on the combination of data mining and deep learning to improve the prediction accuracy. First, data preprocessing is performed: anomalous data are identified and corrected, continuous sequences are normalized, and discrete sequences are one-hot encoded. The load data are decomposed and denoised using the double decomposition modal (LVMD) strategy, the load curves are clustered using the double weighted fuzzy C-means (DBFCM) algorithm, and the typical curves obtained are used as load patterns. In addition, data feature analysis is performed. A convolutional neural network (CNN) is used to extract data features. A bidirectional long short-term memory (BLSTM) network is used for prediction, in which the number of hidden layer neurons, the number of training epochs, the learning rate, the regularization coefficient, and other relevant parameters in the BLSTM network are optimized using the influenza virus immunity optimization algorithm (IVIA). Finally, the historical data of City H from 1 January 2016 to 31 December 2018 are used for load forecasting. The experimental results show that the novel model based on LVMD-DBFCM load curve clustering combined with CNN-IVIA-BLSTM proposed in this paper has an error of only 2% for electric load forecasting. Full article
(This article belongs to the Topic Soft Computing)
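The clustering step can be illustrated with standard fuzzy C-means, of which the paper's DBFCM is a double-weighted variant. This is a minimal sketch (the function name `fcm` and all parameters are illustrative) that alternates the usual membership and centroid updates.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=50, seed=0):
    """Standard fuzzy C-means clustering.
    X: (n, d) data; c: number of clusters; m > 1: fuzzifier.
    Returns cluster centers (c, d) and memberships U (c, n)."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0)                                    # memberships sum to 1
    for _ in range(iters):
        W = U ** m
        centers = W @ X / W.sum(axis=1, keepdims=True)    # weighted centroids
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        # u_ik = d_ik^(-2/(m-1)) / sum_j d_jk^(-2/(m-1))
        U = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=0))
    return centers, U
```

In the paper's pipeline, the typical (center) curves found by such a clustering serve as the load patterns fed to the forecasting model.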

25 pages, 370 KiB  
Article
Hermite–Hadamard-Type Inequalities for Coordinated Convex Functions Using Fuzzy Integrals
by Muhammad Amer Latif
Mathematics 2023, 11(11), 2432; https://doi.org/10.3390/math11112432 - 24 May 2023
Cited by 1 | Viewed by 595
Abstract
In this paper, some estimates of third and fourth inequalities in Hermite–Hadamard-type inequalities for coordinated convex functions are proved using the non-additivity of the integrals and Fubini’s theorem for fuzzy integrals. That is, the results are obtained in the fuzzy context and using the Lebesgue measure. Several examples are provided on how to evaluate these estimates in order to illustrate the obtained results. Full article
(This article belongs to the Topic Soft Computing)
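For context, the classical (crisp, one-variable) Hermite–Hadamard inequality that these coordinated, fuzzy-integral results extend reads:

```latex
% Classical Hermite–Hadamard inequality for a convex f on [a, b]
f\!\left(\frac{a+b}{2}\right)
  \;\le\; \frac{1}{b-a}\int_a^b f(x)\,dx
  \;\le\; \frac{f(a)+f(b)}{2}
```

The paper's estimates replace the Riemann integral with fuzzy (non-additive) integrals and work with convexity in each coordinate separately.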
22 pages, 1812 KiB  
Article
Improved and Provably Secure ECC-Based Two-Factor Remote Authentication Scheme with Session Key Agreement
by Fairuz Shohaimay and Eddie Shahril Ismail
Mathematics 2023, 11(1), 5; https://doi.org/10.3390/math11010005 - 20 Dec 2022
Cited by 3 | Viewed by 1457
Abstract
The remote authentication scheme is a cryptographic protocol incorporated by user–server applications to prevent unauthorized access and security attacks. Recently, a two-factor authentication scheme using hard problems in elliptic curve cryptography (ECC)—the elliptic curve discrete logarithm problem (ECDLP), elliptic curve computational Diffie–Hellman problem (ECCDHP), and elliptic curve factorization problem (ECFP)—was developed, but was unable to address several infeasibility issues while incurring high communication costs. Moreover, previous schemes were shown to be vulnerable to privileged insider attacks. Therefore, this research proposes an improved ECC-based authentication scheme with a session key agreement to rectify the infeasible computations and provide a mechanism for the password change/update phase. The formal security analysis proves that the scheme is provably secure under the random oracle model (ROM) and achieves mutual authentication using BAN logic. Based on the performance analysis, the proposed scheme resists the privileged insider attack and attains all of the security goals while keeping the computational costs lower than other schemes based on the three hard problems. Therefore, the findings suggest the potential applicability of the three hard problems in designing identification and authentication schemes in distributed computer networks. Full article
(This article belongs to the Topic Soft Computing)
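The hard problems named above all rest on elliptic curve scalar multiplication. A toy sketch over a tiny textbook curve (y² = x³ + 2x + 2 over F₁₇, far too small for real security) shows the double-and-add operation whose inversion is the ECDLP; all names are illustrative.

```python
# Toy curve y^2 = x^3 + 2x + 2 over F_17 -- a textbook example only.
P_MOD, A = 17, 2

def ec_add(P, Q):
    """Point addition on the toy curve; None is the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                                   # vertical line
    if P == Q:                                        # tangent (doubling)
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:                                             # chord
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, P):
    """Double-and-add: computing k*P is fast, but recovering k from
    (P, k*P) is the elliptic curve discrete logarithm problem (ECDLP)."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R
```

The commutativity `a*(b*G) == b*(a*G)` is what Diffie–Hellman-style key agreement exploits: both parties reach the same shared point from public values.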

22 pages, 3448 KiB  
Article
Population-Based Meta-Heuristic Algorithms for Integrated Batch Manufacturing and Delivery Scheduling Problem
by Yong-Jae Kim and Byung-Soo Kim
Mathematics 2022, 10(21), 4127; https://doi.org/10.3390/math10214127 - 04 Nov 2022
Cited by 5 | Viewed by 1718
Abstract
This paper addresses an integrated scheduling problem of batch manufacturing and delivery processes with a single batch machine and direct-shipping trucks. In the manufacturing process, some jobs in the same family are simultaneously processed as a production batch in a single machine. The batch production time depends only on the family type assigned to the production batch and it is dynamically adjusted by batch deterioration and rate-modifying activities. Each job after the batch manufacturing is reassigned to delivery batches. In the delivery process, each delivery batch is directly shipped to the corresponding customer. The delivery time of delivery batches is determined by the distance between the manufacturing site and customer location. The total volume of jobs in each production or delivery batch must not exceed the machine or truck capacity. The objective function is to minimize the total tardiness of jobs delivered to customers with different due dates. To solve the problem, a mixed-integer linear programming model to find the optimal solution for small problem instances is formulated and meta-heuristic algorithms to find effective solutions for large problem instances are presented. Sensitivity analyses are conducted to find the effect of problem parameters on the manufacturing and delivery time. Full article
(This article belongs to the Topic Soft Computing)
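The objective can be made concrete with a minimal sketch: total tardiness of jobs sequenced by the earliest-due-date (EDD) rule, a common baseline for this objective. The paper's batching, deterioration, and truck-capacity constraints are omitted; the names are illustrative.

```python
def total_tardiness(jobs):
    """Total tardiness of jobs scheduled in earliest-due-date order.
    jobs: list of (processing_time, due_date) tuples.
    Tardiness of a job = max(0, completion_time - due_date)."""
    t, tardiness = 0, 0
    for p, d in sorted(jobs, key=lambda j: j[1]):  # EDD order
        t += p                                      # completion time
        tardiness += max(0, t - d)
    return tardiness
```

The paper's meta-heuristics search over batch assignments and delivery groupings to minimize exactly this kind of accumulated tardiness at the customers.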

15 pages, 487 KiB  
Article
cpd: An R Package for Complex Pearson Distributions
by María José Olmo-Jiménez, Silverio Vílchez-López and José Rodríguez-Avi
Mathematics 2022, 10(21), 4101; https://doi.org/10.3390/math10214101 - 03 Nov 2022
Viewed by 1161
Abstract
The complex Pearson (CP) distributions are a family of probability models for count data generated by the Gaussian hypergeometric function with complex arguments. The complex triparametric Pearson (CTP) distribution and its biparametric versions, the complex biparametric Pearson (CBP) and the extended biparametric Waring (EBW) distributions, belong to this family. They all have explicit expressions of the probability mass function (pmf), probability generating function and moments, so they are easy to handle from a computational point of view. Moreover, the CTP and EBW distributions can model over- and underdispersed count data, whereas the CBP can only handle overdispersed data; unlike other well-known overdispersed distributions, however, the overdispersion is not due to an excess of zeros but to other low values of the variable. Finally, the EBW distribution allows the variance to be split into three uniquely identifiable components: randomness, liability and proneness. These properties make the CP distributions of interest for modeling a great variety of data. For this reason, and to encourage their use, we have implemented an R package called cpd that contains the pmf, distribution function, quantile function and random generation for these distributions. In addition, the package contains fitting functions based on maximum likelihood. This package is available from the Comprehensive R Archive Network (CRAN). In this work, we describe all the functions included in the cpd package and illustrate their usage with several examples. Moreover, a plugin that makes the package usable from the R Commander interface has been released to help spread these models among non-expert users. Full article
(This article belongs to the Topic Soft Computing)

19 pages, 467 KiB  
Article
A Complementary Dual of Single-Valued Neutrosophic Entropy with Application to MAGDM
by Sonam Sharma and Surender Singh
Mathematics 2022, 10(20), 3726; https://doi.org/10.3390/math10203726 - 11 Oct 2022
Cited by 2 | Viewed by 967
Abstract
A single-valued neutrosophic set (SVNS) is a subcategory of neutrosophic set that is used to represent uncertainty and fuzziness in three tiers, namely truthfulness, indeterminacy, and falsity. The measure of entropy of an SVNS plays an important role in determining the ambiguity in a variety of situations. The knowledge measure is a dual form of entropy and is helpful in certain counterintuitive situations. In this paper, we introduce a knowledge measure for the SVNS and contrast it with existing measures. The comparative study reveals that the proposed knowledge measure is more effective in modeling structured linguistic variables. We provide the relations of the proposed knowledge measure with single-valued neutrosophic similarity and distance measures. We also investigate the application of the proposed measure in multi-attribute group decision making (MAGDM). The proposed MAGDM model is helpful when the decision makers in the group have varied backgrounds and the hiring organization is unable to assign a level of importance or weight to a decision-maker. Full article
(This article belongs to the Topic Soft Computing)

18 pages, 5264 KiB  
Article
Projection Pursuit Multivariate Sampling of Parameter Uncertainty
by Oktay Erten, Fábio P. L. Pereira and Clayton V. Deutsch
Appl. Sci. 2022, 12(19), 9668; https://doi.org/10.3390/app12199668 - 26 Sep 2022
Cited by 2 | Viewed by 1573
Abstract
The efficiency of sampling is a critical concern in Monte Carlo analysis, which is frequently used to assess the effect of the uncertainty of the input variables on the uncertainty of the model outputs. The projection pursuit multivariate transform is proposed as an easily applicable tool for improving the efficiency and quality of a sampling design in Monte Carlo analysis. The superiority of the projection pursuit multivariate transform, as a sampling technique, is demonstrated in two synthetic case studies, where the random variables are considered to be uncorrelated and correlated in low (bivariate) and high (five-variate) dimensional sampling spaces. Five sampling techniques including Monte Carlo simulation, classic Latin hypercube sampling, maximin Latin hypercube sampling, Latin hypercube sampling with multidimensional uniformity, and projection pursuit multivariate transform are employed in the simulation studies, considering cases where the sample sizes (n) are small (i.e., 10 ≤ n ≤ 100), medium (i.e., 100 < n ≤ 1000), and large (i.e., 1000 < n ≤ 10,000). The results of the case studies show that the projection pursuit multivariate transform appears to yield the fewest sampling errors and the best sampling space coverage (or multidimensional uniformity), and that a significant amount of computer effort could be saved by using this technique. Full article
(This article belongs to the Topic Soft Computing)
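The Latin hypercube designs compared above can be sketched in a few lines of NumPy: each dimension is divided into n equal strata and each stratum receives exactly one point. This is classic LHS only, not the projection pursuit transform itself; the names are illustrative.

```python
import numpy as np

def latin_hypercube(n, d, rng=None):
    """Classic Latin hypercube sample of n points in [0, 1)^d.
    Each dimension is split into n equal strata; every stratum
    is hit exactly once, via an independent permutation per dimension."""
    rng = np.random.default_rng(rng)
    strata = np.tile(np.arange(n), (d, 1))           # stratum indices per dim
    perms = rng.permuted(strata, axis=1).T           # independent shuffles
    return (perms + rng.random((n, d))) / n          # jitter inside stratum
```

Compared with plain Monte Carlo, this guarantees one-dimensional uniformity of the design, which is what the maximin and multidimensional-uniformity variants then refine.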

24 pages, 1611 KiB  
Article
Visibility Adaptation in Ant Colony Optimization for Solving Traveling Salesman Problem
by Abu Saleh Bin Shahadat, M. A. H. Akhand and Md Abdus Samad Kamal
Mathematics 2022, 10(14), 2448; https://doi.org/10.3390/math10142448 - 13 Jul 2022
Cited by 8 | Viewed by 2474
Abstract
Ant Colony Optimization (ACO) is a practical and well-studied bio-inspired algorithm to generate feasible solutions for combinatorial optimization problems such as the Traveling Salesman Problem (TSP). ACO is inspired by the foraging behavior of ants, where an ant selects the next city to visit according to the pheromone on the trail and the visibility heuristic (inverse of distance). ACO assigns higher heuristic desirability to the nearest city without considering the issue of returning to the initial city or starting point once all the cities are visited. This study proposes an improved ACO-based method, called ACO with Adaptive Visibility (ACOAV), which intelligently adopts a generalized formula of the visibility heuristic associated with the final destination city. ACOAV uses a new distance metric that includes proximity and eventual destination to select the next city. Including the destination in the metric reduces the tour cost because such adaptation helps to avoid using longer links while returning to the starting city. In addition, partial updates of individual solutions and 3-Opt local search operations are incorporated in the proposed ACOAV. ACOAV is evaluated on a suite of 35 benchmark TSP instances and rigorously compared with ACO. ACOAV generates better solutions for TSPs than ACO, while taking less computational time; such twofold achievements indicate the proficiency of the individual adaptation techniques in ACOAV, especially the adaptive visibility and the partial solution update. The performance of ACOAV is also compared with ten other state-of-the-art bio-inspired methods, including several ACO-based methods. From these evaluations, ACOAV is found to be the best for 29 of the 35 TSP instances; among those, optimal solutions are achieved in 22 instances. Moreover, statistical tests comparing the performance revealed the significance of the proposed ACOAV over the considered bio-inspired methods. Full article
(This article belongs to the Topic Soft Computing)
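The destination-aware visibility idea can be sketched as follows. Classic ACO uses η = 1/d(current, j); the sketch below also discounts cities that lie far from the eventual return to the start. This is an illustrative rendering of ACOAV's adaptation, not the paper's exact formula, and all names are hypothetical.

```python
import numpy as np

def next_city_probs(current, start, unvisited, dist, tau,
                    alpha=1.0, beta=2.0, w=0.5):
    """ACO transition probabilities with a destination-aware visibility.
    tau: pheromone matrix; dist: distance matrix; w weights how much
    the distance back to `start` discounts a candidate city."""
    tau_j = np.array([tau[current][j] for j in unvisited])
    eta_j = np.array([1.0 / (dist[current][j] + w * dist[j][start])
                      for j in unvisited])
    score = tau_j ** alpha * eta_j ** beta      # standard ACO weighting
    return score / score.sum()                  # normalize to probabilities
```

With w = 0 this reduces to the classic visibility heuristic; w > 0 biases the ants away from cities that would make the final return leg long.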

28 pages, 11108 KiB  
Article
Swarm-Intelligence Optimization Method for Dynamic Optimization Problem
by Rui Liu, Yuanbin Mo, Yanyue Lu, Yucheng Lyu, Yuedong Zhang and Haidong Guo
Mathematics 2022, 10(11), 1803; https://doi.org/10.3390/math10111803 - 25 May 2022
Cited by 8 | Viewed by 2039
Abstract
In recent years, the vigorous rise in computational intelligence has opened up new research ideas for solving chemical dynamic optimization problems, making the application of swarm-intelligence optimization techniques more and more widespread. However, the potential for algorithms with different performances still needs to be further investigated in this context. On this premise, this paper puts forward a universal swarm-intelligence dynamic optimization framework, which transforms the infinite-dimensional dynamic optimization problem into a finite-dimensional nonlinear programming problem through control variable parameterization. In order to improve the efficiency and accuracy of dynamic optimization, an improved version of the multi-strategy enhanced sparrow search algorithm is proposed from the application side, including good-point set initialization, hybrid algorithm strategy, Lévy flight mechanism, and Student's t-distribution model. The resulting augmented algorithm is theoretically tested on ten benchmark functions and compared with the whale optimization algorithm, marine predators algorithm, Harris hawks optimization, social group optimization, and the basic sparrow search algorithm; statistical results verify that the improved algorithm has advantages in most tests. Finally, the six algorithms are further applied to three typical dynamic optimization problems under the universal swarm-intelligence dynamic optimization framework. The proposed algorithm achieves optimal results and has higher accuracy than methods in other references. Full article
(This article belongs to the Topic Soft Computing)
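The Lévy flight mechanism mentioned above is commonly implemented with Mantegna's algorithm; here is a minimal sketch (an illustrative, standard implementation, not necessarily the paper's exact parameterization).

```python
import math
import numpy as np

def levy_step(size, beta=1.5, rng=None):
    """Lévy flight steps via Mantegna's algorithm:
    step = u / |v|^(1/beta), u ~ N(0, sigma^2), v ~ N(0, 1).
    The heavy-tailed jumps help a swarm escape local optima."""
    rng = np.random.default_rng(rng)
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
             ) ** (1 / beta)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)
```

Most steps are small, but occasional very large jumps occur, which is the exploration property the sparrow search variant exploits.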

18 pages, 380 KiB  
Article
Harvesting the Aggregate Computing Power of Commodity Computers for Supercomputing Applications
by Dereje Regassa, Heonyoung Yeom and Yongseok Son
Appl. Sci. 2022, 12(10), 5113; https://doi.org/10.3390/app12105113 - 19 May 2022
Cited by 2 | Viewed by 1854
Abstract
Distributed supercomputing is becoming common in companies and academia alike. Much parallel computing research has focused on harnessing the power of commodity processors, and even Internet-connected computers, aggregating their computational power to solve computationally complex problems. Using flexible commodity cluster computers for supercomputing workloads, rather than a dedicated supercomputer and expensive high-performance computing (HPC) infrastructure, is cost-effective. Its scalable nature makes better use of the available organizational resources, which can benefit researchers who aim to conduct numerous repetitive calculations on small to large volumes of data and obtain valid results in a reasonable time. In this paper, we design and implement an HPC-based supercomputing facility from commodity computers at an organizational level, providing two separate cluster implementations: Hadoop and Spark-based HPC clusters, primarily for data-intensive jobs, and Torque-based clusters for Multiple Instruction Multiple Data (MIMD) workloads. The performance of these clusters is measured through extensive experimentation. With the implementation of the message passing interface, the performance of the Spark and Torque clusters is increased by 16.6% for repetitive applications and by 73.68% for computation-intensive applications, with speedups of 1.79 and 2.47, respectively, on the HPDA cluster. We conclude that the specific application or job could be chosen to run based on the computation parameters on the implemented clusters. Full article
(This article belongs to the Topic Soft Computing)

14 pages, 1632 KiB  
Article
Suitability Evaluation of the Lining Form Based on Combination Weighting–Set Pair Analysis
by Chen Xing, Leihua Yao, Yingdong Wang and Zijuan Hu
Appl. Sci. 2022, 12(10), 4896; https://doi.org/10.3390/app12104896 - 12 May 2022
Cited by 3 | Viewed by 1354
Abstract
To address the many uncertain factors in the suitability evaluation of reinforced concrete lining of high-pressure pipelines, set pair analysis (SPA) theory is used to establish a suitability evaluation model. By summarizing the key influencing factors of typical lining design criteria, five suitability evaluation indices are determined from three criteria, i.e., the minimum overburden criterion, the minimum principal stress criterion, and the hydraulic fracturing criterion. In order to fully consider subjective and objective factors, the combination ordered weighted averaging (C-OWA) operator and the criteria importance through intercriteria correlation (CRITIC)-entropy weighting model (EWM) are used to construct a combination weighting method, and the weight coefficients of each index are comprehensively determined. Based on SPA theory and its calculation rules, combined with the lining suitability grading criteria, the five-element connection degree of each index and the comprehensive connection degree of each working point are calculated. In this study, the model is applied to the suitability evaluation of reinforced concrete lining at each drilling point of the high-pressure pipeline of a pumped storage power station (PSPS) in Shanxi Province. The results show that the proposed model, combining subjective and objective weights, can effectively avoid the error caused by a single weighting method, which improves the evaluation sensitivity and rationality. Full article
(This article belongs to the Topic Soft Computing)
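The entropy weighting model (EWM) half of the combination weighting can be sketched directly: criteria whose values vary more across alternatives carry more information and receive larger weights. This is a standard EWM sketch; the paper combines it with the C-OWA operator and CRITIC, which are omitted here, and the names are illustrative.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method for a decision matrix X of shape
    (alternatives, criteria) with positive scores. A criterion whose
    column is nearly constant has entropy ~1 and gets weight ~0."""
    P = X / X.sum(axis=0)                          # column-wise proportions
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plogp.sum(axis=0) / np.log(n)             # normalized entropy
    d = 1 - e                                      # divergence per criterion
    return d / d.sum()                             # objective weights
```

In the paper, such objective weights are blended with subjective (C-OWA) weights so that neither source of information dominates.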

33 pages, 3764 KiB  
Article
Adaptive Differential Evolution Algorithm Based on Fitness Landscape Characteristic
by Liming Zheng and Shiqi Luo
Mathematics 2022, 10(9), 1511; https://doi.org/10.3390/math10091511 - 01 May 2022
Cited by 4 | Viewed by 1820
Abstract
Differential evolution (DE) is a simple, effective, and robust algorithm that has demonstrated excellent performance on global optimization problems. However, different search strategies are designed for different fitness landscapes, and no single strategy is suitable for all of them. It is therefore critical to develop a strategy that adaptively steers population evolution based on the fitness landscape. Motivated by this fact, this paper proposes a novel adaptive DE based on fitness landscape (FL-ADE), which uses the local fitness landscape characteristics of each generation's population to (1) adjust the population size adaptively and (2) generate a DE/current-to-pcbest mutation strategy. The adaptive mechanism is based on the local fitness landscape characteristics of the population and makes it possible to decrease or increase the population size during the search. Because the population size is adaptively adjusted for different fitness landscapes and evolutionary stages, computational resources can be rationally assigned to satisfy the diverse requirements of different fitness landscapes. In addition, the DE/current-to-pcbest mutation strategy, which randomly chooses one of the top p% individuals from the archive cbest of local optimal individuals to be the pcbest, is also an adaptive strategy based on fitness landscape characteristics. Using individuals that are approximately local optima increases the algorithm's ability to explore complex multimodal functions and avoids the stagnation caused by relying only on individuals with good fitness values. Experiments are conducted on the CEC2014 benchmark test suite to demonstrate the performance of the proposed FL-ADE algorithm, and the results show that it performs better than seven other high-performing state-of-the-art DE variants, including the winners of the CEC2014 and CEC2017 competitions. In addition, the effectiveness of the adaptive population mechanism and of the fitness-landscape-based DE/current-to-pcbest mutation strategy are each verified. Full article
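The DE/current-to-pcbest mutation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the assumption that the archive is sorted by fitness, and the control parameters F and p are all illustrative.

```python
import random

def mutate_current_to_pcbest(pop, i, archive_cbest, F=0.5, p=0.1):
    """Sketch of a DE/current-to-pcbest mutation (names illustrative).

    v_i = x_i + F * (pcbest - x_i) + F * (x_r1 - x_r2),
    where pcbest is drawn from the top p% of an archive of
    approximate local optima instead of the main population.
    """
    dim = len(pop[i])
    # pcbest: a random member of the best ceil(p * |archive|) archived
    # local optima (archive assumed sorted best-first by fitness)
    k = max(1, int(p * len(archive_cbest)))
    pcbest = random.choice(archive_cbest[:k])
    # two distinct random population members, both different from i
    r1, r2 = random.sample([j for j in range(len(pop)) if j != i], 2)
    return [pop[i][d] + F * (pcbest[d] - pop[i][d])
            + F * (pop[r1][d] - pop[r2][d]) for d in range(dim)]
```

With F = 0 the mutant reduces to the current individual, which makes the role of the two difference terms easy to see.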
(This article belongs to the Topic Soft Computing)

17 pages, 2790 KiB  
Article
Deriving Situation-Adaptive Policy for Container Stacking in an Automated Container Terminal
by Taekwang Kim and Kwang Ryel Ryu
Appl. Sci. 2022, 12(8), 3892; https://doi.org/10.3390/app12083892 - 12 Apr 2022
Cited by 1 | Viewed by 1964
Abstract
Determining where to stack containers at the storage yard of a container terminal is an important problem, because that decision critically affects the efficiency of container handling in the yard and, ultimately, the efficiency of the vessel operations, which are the most important for the productivity of the whole terminal. One limitation of previously proposed stacking policies is that they are static: although good stacking locations may change as the workload of vessel operation changes, these policies are insensitive to such changes. Failure to recommend good locations prolongs yard-crane operations and thus makes it hard for the cranes to keep up with the workload of vessel operation. In this paper, we propose a method for deriving a dynamic policy that can adapt to the workload of vessel operation as it changes over time. Our method derives two boundary policies, one for very high workload and the other for very low, and then synthesizes a policy appropriate for any intermediate workload from the two boundary policies through interpolation. Simulation experiments showed that the proposed policy significantly reduces overall container handling time compared to the previous static policy. Measured in terms of the time the transportation vehicles wait for container handling services, the improvement was approximately 19%. Full article
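The interpolation idea above can be sketched in a few lines. This assumes, for illustration only, that each boundary policy is a weight vector scoring candidate slots; the paper's actual policy representation may differ.

```python
def interpolate_policy(w_high, w_low, workload, lo=0.0, hi=1.0):
    """Blend two boundary stacking policies by the current workload.

    w_high / w_low are weight vectors assumed to parameterize the
    policies learned for very high and very low vessel-operation
    workload (illustrative representation).
    """
    t = min(max((workload - lo) / (hi - lo), 0.0), 1.0)  # clamp to [0, 1]
    return [t * wh + (1.0 - t) * wl for wh, wl in zip(w_high, w_low)]

def best_slot(slots, features, weights):
    """Pick the stacking slot with the highest weighted feature score."""
    return max(slots, key=lambda s: sum(w * f
                                        for w, f in zip(weights, features[s])))
```

At workload halfway between the two boundaries, the synthesized policy is simply the average of the two weight vectors.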
(This article belongs to the Topic Soft Computing)

24 pages, 1612 KiB  
Article
IORand: A Procedural Videogame Level Generator Based on a Hybrid PCG Algorithm
by Marco A. Moreno-Armendáriz, Hiram Calvo, José A. Torres-León and Carlos A. Duchanoy
Appl. Sci. 2022, 12(8), 3792; https://doi.org/10.3390/app12083792 - 09 Apr 2022
Cited by 1 | Viewed by 2947
Abstract
In this work we present the intelligent orchestrator of random generators (IORand), a hybrid procedural content generation (PCG) algorithm driven by game experience and based on reinforcement learning and semi-random content generation methods. Our study includes a review of current PCG techniques and of why hybrid approaches have become a new trend with promising results in the area. We also present the design of a new graph-based method for evaluating video game levels, aimed at assessing game experience, which identifies the type of interaction the player will have with the level. We then present the design of our hybrid PCG algorithm, IORand, whose reward function is based on the proposed level evaluation method. Finally, a study of the algorithm's performance in generating levels for three different game experiences demonstrates that IORand satisfactorily and consistently generates levels that provide specific game experiences. Full article
(This article belongs to the Topic Soft Computing)

32 pages, 8536 KiB  
Article
A Dimension Group-Based Comprehensive Elite Learning Swarm Optimizer for Large-Scale Optimization
by Qiang Yang, Kai-Xuan Zhang, Xu-Dong Gao, Dong-Dong Xu, Zhen-Yu Lu, Sang-Woon Jeon and Jun Zhang
Mathematics 2022, 10(7), 1072; https://doi.org/10.3390/math10071072 - 26 Mar 2022
Cited by 17 | Viewed by 1851
Abstract
High-dimensional optimization problems are increasingly common in the era of big data and the Internet of Things (IoT), and they seriously challenge the performance of existing optimizers. To solve such problems effectively, this paper devises a dimension group-based comprehensive elite learning swarm optimizer (DGCELSO), which integrates the valuable evolutionary information of different elite particles in the swarm to guide the updating of inferior ones. Specifically, the swarm is first separated into two exclusive sets: the elite set (ES), containing the top-ranked individuals, and the non-elite set (NES), consisting of the remaining individuals. Then, the dimensions of each particle in the NES are randomly divided into several groups of equal size. Subsequently, each dimension group of each non-elite particle is guided by two different elites randomly selected from the ES. In this way, each non-elite particle is comprehensively guided by multiple elite particles, so not only can high diversity be maintained, but fast convergence is also likely to be achieved. To alleviate the sensitivity of DGCELSO to its associated parameters, we further devise dynamic adjustment strategies that change the parameter settings during evolution. With the above mechanisms, DGCELSO is expected to explore and exploit the solution space properly to find the optimal solutions of optimization problems. Extensive experiments conducted on two commonly used large-scale benchmark problem sets demonstrate that DGCELSO achieves highly competitive, or even much better, performance than several state-of-the-art large-scale optimizers. Full article
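The group-wise elite guidance can be sketched as below. This is a minimal stand-in for the paper's update rule: the single learning coefficient `phi` and the equal-size grouping by contiguous index ranges are simplifying assumptions.

```python
import random

def dgcel_update(non_elite, elites, n_groups=2, phi=0.3):
    """Sketch of dimension group-based comprehensive elite learning.

    The dimensions of a non-elite particle are split into equal-size
    groups; each group is pulled toward two elites drawn at random
    from the elite set (parameter names illustrative).
    """
    dim = len(non_elite)
    group = dim // n_groups
    child = list(non_elite)
    for g in range(n_groups):
        e1, e2 = random.sample(elites, 2)  # two distinct elites per group
        for d in range(g * group, (g + 1) * group):
            # move this group's dimensions toward both sampled elites
            child[d] += phi * (e1[d] - child[d]) + phi * (e2[d] - child[d])
    return child
```

Because each group draws its own pair of elites, a single offspring blends information from up to 2 × n_groups elite particles.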
(This article belongs to the Topic Soft Computing)

39 pages, 921 KiB  
Article
Stochastic Triad Topology Based Particle Swarm Optimization for Global Numerical Optimization
by Qiang Yang, Yu-Wei Bian, Xu-Dong Gao, Dong-Dong Xu, Zhen-Yu Lu, Sang-Woon Jeon and Jun Zhang
Mathematics 2022, 10(7), 1032; https://doi.org/10.3390/math10071032 - 24 Mar 2022
Cited by 21 | Viewed by 2114
Abstract
Particle swarm optimization (PSO) has exhibited well-known feasibility in problem optimization. However, its performance still encounters challenges on complicated optimization problems with many local optima. In PSO, the interaction among particles and the utilization of communication information play crucial roles in improving the learning effectiveness and learning diversity of particles. To promote communication effectiveness among particles, this paper proposes a stochastic triad topology that lets each particle communicate with two random peers in the swarm via their personal best positions. Then, unlike existing studies that use the personal best position of the updated particle and the neighboring best position of the topology to direct its update, this paper adopts the best one and the mean position of the three personal best positions in the associated triad topology as the two guiding exemplars for each particle's update. To further promote interaction diversity among particles, an archive is maintained to store the obsolete personal best positions of particles and is then used to interact with particles in the triad topology. To enhance the chance of escaping from local regions, a random restart strategy is probabilistically triggered to introduce initialized solutions into the archive. To alleviate sensitivity to parameters, dynamic adjustment strategies are designed to adjust the associated parameter settings during evolution. Integrating the above mechanisms, a stochastic triad topology-based PSO (STTPSO) is developed to effectively search complex solution spaces. With these techniques, the learning diversity and learning effectiveness of particles are largely promoted, and the developed STTPSO is thus expected to explore and exploit the solution space appropriately to find high-quality solutions. Extensive experiments conducted on the commonly used CEC 2017 benchmark problem set with different dimension sizes substantiate that the proposed STTPSO achieves highly competitive, or even much better, performance than state-of-the-art and representative PSO variants. Full article
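Selecting the two guiding exemplars from a stochastic triad can be sketched as follows (minimization assumed; the function name and data layout are illustrative, and the archive interaction and restart mechanisms are omitted).

```python
import random

def triad_exemplars(pbests, fitness, i):
    """Sketch of the stochastic triad topology: particle i forms a
    triad with two random peers; the guiding exemplars are the best
    of the three personal bests and their mean position."""
    j, k = random.sample([m for m in range(len(pbests)) if m != i], 2)
    triad = [i, j, k]
    best = min(triad, key=lambda m: fitness[m])       # best pbest in triad
    mean = [sum(pbests[m][d] for m in triad) / 3.0    # mean of the 3 pbests
            for d in range(len(pbests[i]))]
    return pbests[best], mean
```

The best exemplar pulls the particle toward promising regions, while the mean exemplar provides a smoother, diversity-preserving guide.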
(This article belongs to the Topic Soft Computing)

17 pages, 1955 KiB  
Article
Dealing with Uncertainty in the MRCPSP/Max Using Discrete Differential Evolution and Entropy
by Angela Hsiang-Ling Chen, Yun-Chia Liang and José David Padilla
Appl. Sci. 2022, 12(6), 3049; https://doi.org/10.3390/app12063049 - 16 Mar 2022
Viewed by 1508
Abstract
In this paper, we investigate the characterization of the MRCPSP/max under uncertainty and emphasize the managerial ability to recognize and handle positively disruptive events. This proposition is demonstrated using an entropy approach to find disruptive events and response time intervals. The problem is solved with a three-stage procedure whose resilience is gauged by schedule robustness and adaptivity; the resulting schedule absorbs the impact of an unexpected event without rescheduling during execution. A discrete variant of the differential evolution algorithm, known as DDE, is proposed and evaluated against the best-known optima (BKO). Our findings indicate that DDE is effective overall; moreover, compared against the BKO at every stage, the most significant result is that the stability of the solutions provided by DDE under the three-stage framework proves sufficiently robust when practitioners add response times at certain range levels, in this case from 8% to 15%. Full article
(This article belongs to the Topic Soft Computing)

40 pages, 7605 KiB  
Article
Techno-Economic and Environmental Analysis of Grid-Connected Electric Vehicle Charging Station Using AI-Based Algorithm
by Mohd Bilal, Ibrahim Alsaidan, Muhannad Alaraj, Fahad M. Almasoudi and Mohammad Rizwan
Mathematics 2022, 10(6), 924; https://doi.org/10.3390/math10060924 - 14 Mar 2022
Cited by 24 | Viewed by 3469
Abstract
The rapid growth of electric vehicles in India necessitates more power to energize these vehicles. Furthermore, the transport industry emits greenhouse gases, particularly SO2 and CO2. The national grid has to supply an enormous amount of power on a daily basis because of the surplus power required to charge these electric vehicles. This paper presents various hybrid energy system configurations to meet the power requirements of an electric vehicle charging station (EVCS) situated in the northwest region of Delhi, India. The three configurations are: (a) solar photovoltaic/diesel generator/battery-based EVCS, (b) solar photovoltaic/battery-based EVCS, and (c) grid-and-solar photovoltaic-based EVCS. Meta-heuristic techniques are implemented to analyze the technological, financial, and environmental feasibility of the three configurations. The optimization algorithm aims to reduce the total net present cost and the levelized cost of energy while keeping the loss of power supply probability within limits. To confirm the quality of the solution obtained using the modified salp swarm algorithm (MSSA), the widely used HOMER software, the salp swarm algorithm (SSA), and gray wolf optimization are applied to the same problem, and their outcomes are compared with those attained by the MSSA. Based on the simulation outcomes, the MSSA exhibits superior accuracy and robustness, and it performs much better in terms of computation time, followed by the SSA and gray wolf optimization. The MSSA yields reduced levelized cost of energy values in all three configurations: USD 0.482/kWh, USD 0.684/kWh, and USD 0.119/kWh in configurations 1, 2, and 3, respectively. Our findings will be useful for researchers in determining the best method for sizing energy system components. Full article
(This article belongs to the Topic Soft Computing)

12 pages, 916 KiB  
Article
Detection of Insulators on Power Transmission Line Based on an Improved Faster Region-Convolutional Neural Network
by Haijian Hu, Yicen Liu and Haina Rong
Algorithms 2022, 15(3), 83; https://doi.org/10.3390/a15030083 - 01 Mar 2022
Cited by 4 | Viewed by 2563
Abstract
Detecting insulators on a power transmission line is of great importance for the safe operation of power systems. The original feature extraction network VGG16 of the faster region-convolutional neural network (R-CNN) suffers from missed detections and misjudgments when faced with insulators of different sizes. To improve the accuracy of insulator detection on power transmission lines, an improved faster R-CNN algorithm is therefore proposed. It replaces the original VGG16 backbone feature extraction network of faster R-CNN with the deeper and more structurally complex ResNet50 network, and adds an efficient channel attention module based on the channel attention mechanism. Experimental results show that this improvement of the backbone effectively strengthens feature extraction performance. The network model is trained on a training set of 6174 insulator pictures and tested on a testing set of 686 pictures. Compared with the traditional faster R-CNN, the mean average precision of the improved faster R-CNN increases to 89.37%, an improvement of 1.63 percentage points. Full article
(This article belongs to the Topic Soft Computing)

34 pages, 382 KiB  
Article
Stochastic Cognitive Dominance Leading Particle Swarm Optimization for Multimodal Problems
by Qiang Yang, Litao Hua, Xudong Gao, Dongdong Xu, Zhenyu Lu, Sang-Woon Jeon and Jun Zhang
Mathematics 2022, 10(5), 761; https://doi.org/10.3390/math10050761 - 27 Feb 2022
Cited by 25 | Viewed by 2069
Abstract
Optimization problems are becoming increasingly complicated in the era of big data and the Internet of Things, which significantly challenges the effectiveness and efficiency of existing optimization methods. To solve such problems effectively, this paper puts forward a stochastic cognitive dominance leading particle swarm optimization algorithm (SCDLPSO). Specifically, for each particle, two personal cognitive best positions are first randomly selected from those of all particles. Then, only when the cognitive best position of the particle is dominated by at least one of the two selected ones is the particle updated, by cognitively learning from the better personal positions; otherwise, the particle is not updated and directly enters the next generation. With this stochastic cognitive dominance leading mechanism, the learning diversity and learning efficiency of particles are expected to be promoted, and the optimizer is thus expected to explore and exploit the solution space properly. Finally, extensive experiments are conducted on a widely acknowledged benchmark problem set with different dimension sizes to evaluate the effectiveness of the proposed SCDLPSO. Experimental results demonstrate that the devised optimizer achieves highly competitive, or even much better, performance than several state-of-the-art PSO variants. Full article
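The dominance gate that decides whether a particle is updated can be sketched as below (minimization assumed; the function name and return shape are illustrative, and the subsequent learning step is omitted).

```python
import random

def dominance_update_gate(pbest_fit, i):
    """Sketch of the stochastic cognitive dominance gate: particle i
    is updated only if its personal best is worse than at least one
    of two randomly chosen peers' personal bests (minimization).

    Returns (should_update, indices_of_dominating_peers)."""
    a, b = random.sample([m for m in range(len(pbest_fit)) if m != i], 2)
    better = [m for m in (a, b) if pbest_fit[m] < pbest_fit[i]]
    return (True, better) if better else (False, [])
```

Particles whose personal bests already dominate both sampled peers skip the update and pass unchanged into the next generation, which is what preserves good solutions.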
(This article belongs to the Topic Soft Computing)
14 pages, 1207 KiB  
Article
Learning Static-Adaptive Graphs for RGB-T Image Saliency Detection
by Zhengmei Xu, Jin Tang, Aiwu Zhou and Huaming Liu
Information 2022, 13(2), 84; https://doi.org/10.3390/info13020084 - 12 Feb 2022
Viewed by 1937
Abstract
Many image saliency detection methods have been proposed to handle challenging conditions such as low illumination, cluttered backgrounds, and low contrast. Although these algorithms achieve good performance, detection results based on the RGB modality alone are still poor. Inspired by recent progress in multi-modality fusion, we propose a novel RGB-thermal saliency detection algorithm that learns static-adaptive graphs. Specifically, we first extract superpixels from the two modalities and calculate their affinity matrix. Then, we learn the affinity matrix dynamically and construct a static-adaptive graph. Finally, the saliency maps are obtained by a two-stage ranking algorithm. Our method is evaluated on the RGBT-Saliency Dataset with eleven challenging subsets. Experimental results show that the proposed method has better generalization performance. The complementary benefits of RGB and thermal images, together with the more robust feature expression of the learned static-adaptive graphs, provide an effective way to improve the detection of image saliency in complex scenes. Full article
(This article belongs to the Topic Soft Computing)

13 pages, 10819 KiB  
Article
Security Graphics with Multilayered Elements in the Near-Infrared and Visible Spectrum
by Jana Žiljak Gršić, Denis Jurečić, Lidija Tepeš Golubić and Silvio Plehati
Information 2022, 13(2), 47; https://doi.org/10.3390/info13020047 - 20 Jan 2022
Cited by 3 | Viewed by 2002
Abstract
In this paper, the fusion of four graphics into one integrated graphic is selectively observed in the visible and infrared spectrum. Each graphic carries its own information derived from one of the following sources: vector graphics, a drawing, a photograph, and textual information. After printing, one graphic is visible to the naked eye; the other graphics, nested into the selected visible graphic, are observed with an NIR surveillance camera. Together, the graphics make up a security print product with the characteristics of an individual solution with multilayered elements. Reprinting is possible only for a person in possession of the solution created according to the algorithm based on the INFRAREDESIGN® method. Once these graphics are printed on paper, it is impossible to produce an identical graphic prepress (C, M, Y, K) and thereby forge a print with the same dual properties in the visible and NIR spectrum. Full article
(This article belongs to the Topic Soft Computing)

16 pages, 10880 KiB  
Article
Procedural Video Game Scene Generation by Genetic and Neutrosophic WASPAS Algorithms
by Aurimas Petrovas and Romualdas Bausys
Appl. Sci. 2022, 12(2), 772; https://doi.org/10.3390/app12020772 - 13 Jan 2022
Cited by 9 | Viewed by 2678
Abstract
The demand for automated game development assistance tools can be met by computational creativity algorithms, and procedural generation is one such approach to creative content development. The main procedural generation challenge for game level layout is creating a diverse set of levels that can match a human-crafted game scene. Our game scene layouts are created randomly and then sculpted using a genetic algorithm. To address the issue of fitness calculation with conflicting criteria, we use weighted aggregated sum product assessment (WASPAS) in a single-valued neutrosophic set (SVNS) environment, which models indeterminacy with truth, indeterminacy, and falsehood memberships. Results are presented as an encoded game object grid in which each game object type has a specific function. The algorithm creates a diverse set of game scene layouts by combining game rule validation and aesthetic principles, and it successfully creates functional aesthetic patterns without explicitly defining the shapes of combinations of game objects. Full article
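The WASPAS aggregation underlying the fitness calculation can be sketched in its classic crisp form. The paper extends this to single-valued neutrosophic numbers; the sketch below uses crisp, already-normalized criterion values and is only meant to show the weighted-sum/weighted-product blend.

```python
def waspas_score(x, w, lam=0.5):
    """Classic crisp WASPAS aggregation: a convex combination of the
    weighted sum model (WSM) and the weighted product model (WPM).

    x: normalized criterion values in [0, 1]; w: criterion weights
    summing to 1; lam: blending coefficient lambda in [0, 1]."""
    wsm = sum(wi * xi for wi, xi in zip(w, x))      # weighted sum part
    wpm = 1.0
    for wi, xi in zip(w, x):                        # weighted product part
        wpm *= xi ** wi
    return lam * wsm + (1.0 - lam) * wpm
```

With lam = 1 the score reduces to the plain weighted sum; with lam = 0 it is the pure weighted product, which penalizes any criterion near zero much more strongly.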
(This article belongs to the Topic Soft Computing)

33 pages, 2864 KiB  
Article
Exploring Bidirectional Performance of Hotel Attributes through Online Reviews Based on Sentiment Analysis and Kano-IPA Model
by Yanyan Chen, Yumei Zhong, Sumin Yu, Yan Xiao and Sining Chen
Appl. Sci. 2022, 12(2), 692; https://doi.org/10.3390/app12020692 - 11 Jan 2022
Cited by 13 | Viewed by 3723
Abstract
As people increasingly rely on online reviews when making hotel booking decisions, effectively improving customer ratings has become a major concern for hotel managers. Online reviews serve as a promising data source for enhancing service attributes in order to increase online bookings. This paper employs online customer ratings and textual reviews to explore the bidirectional performance (good performance in positive reviews and poor performance in negative reviews) of hotel attributes across four hotel star ratings. Sentiment analysis and a combination of the Kano model and importance-performance analysis (IPA) are applied. Feature extraction and sentiment analysis techniques are used to analyze the bidirectional performance of hotel attributes from 1,090,341 online reviews of hotels in London collected from TripAdvisor.com (accessed on 4 January 2022). In particular, a new sentiment lexicon for the hospitality domain is built from numerous online reviews using the PolarityRank algorithm to convert textual reviews into sentiment scores. The Kano-IPA model is applied to explain customers' rating behaviors and to prioritize attributes for improvement. The results identify the determinants of high and low customer ratings for hotels of different star ratings and suggest that the attributes contributing to high/low ratings vary across star ratings. In addition, the paper analyzes the Kano categories and priority rankings of six hotel attributes for each star rating to formulate improvement strategies. Theoretical and practical implications of these results are discussed at the end. Full article
(This article belongs to the Topic Soft Computing)

6 pages, 248 KiB  
Article
A Note on “Wiener Index of a Fuzzy Graph and Application to Illegal Immigration Networks”
by Hoon Lee, Xue-gang Chen and Moo Young Sohn
Appl. Sci. 2022, 12(1), 304; https://doi.org/10.3390/app12010304 - 29 Dec 2021
Cited by 1 | Viewed by 1315
Abstract
Connectivity parameters play an important role in the study of communication networks. The Wiener index is one such parameter, with applications in networking, facility location, cryptology, chemistry, molecular biology, and other fields. In this paper, we present two notes related to the Wiener index of a fuzzy graph. First, we show that Theorem 3.10 in the paper “Wiener index of a fuzzy graph and application to illegal immigration networks, Fuzzy Sets and Syst. 384 (2020) 132–147” is not correct, and we give a corrected statement of the theorem. Second, using a new matrix operator, we propose a simple polynomial-time algorithm to compute the Wiener index of a fuzzy graph. Full article
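For orientation, the crisp analogue of the quantity in question can be computed as below: the Wiener index of a weighted graph is the sum of shortest-path distances over all unordered vertex pairs. This is not the paper's fuzzy-graph algorithm (which works with strengths of connectedness via a new matrix operator); it is only the familiar crisp baseline, here via Floyd-Warshall.

```python
def wiener_index(weights):
    """Wiener index of a crisp weighted graph (crisp baseline only).

    weights[i][j] > 0 is the edge weight between i and j; 0 means
    no edge (and weights[i][i] == 0). Runs Floyd-Warshall, then sums
    shortest-path distances over unordered vertex pairs."""
    n = len(weights)
    INF = float("inf")
    d = [[weights[i][j] if weights[i][j] else (0 if i == j else INF)
          for j in range(n)] for i in range(n)]
    for k in range(n):                      # Floyd-Warshall relaxation
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return sum(d[i][j] for i in range(n) for j in range(i + 1, n))
```

On a 3-vertex path with unit edge weights the distances are 1, 1, and 2, so the index is 4.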
(This article belongs to the Topic Soft Computing)

10 pages, 2327 KiB  
Article
Improving the Energy Efficiency of an Existing Building by Dynamic Numerical Simulation
by Lelia Letitia Popescu, Razvan Stefan Popescu and Tiberiu Catalina
Appl. Sci. 2021, 11(24), 12150; https://doi.org/10.3390/app112412150 - 20 Dec 2021
Cited by 5 | Viewed by 1971
Abstract
Nowadays, enhancing the energy performance of the existing building stock is a priority. To promote building energy renovation, the European Commission asks Member States to define retrofit strategies and find cost-effective solutions. This research investigates the relationship between the initial characteristics of an existing residential building and different types of retrofit solutions in terms of final/primary energy consumption and CO2 emissions. A multi-objective optimization was carried out using experimental data in the DesignBuilder dynamic simulation tool. Full article
(This article belongs to the Topic Soft Computing)

38 pages, 1097 KiB  
Article
An Adaptive Covariance Scaling Estimation of Distribution Algorithm
by Qiang Yang, Yong Li, Xu-Dong Gao, Yuan-Yuan Ma, Zhen-Yu Lu, Sang-Woon Jeon and Jun Zhang
Mathematics 2021, 9(24), 3207; https://doi.org/10.3390/math9243207 - 11 Dec 2021
Cited by 26 | Viewed by 2556
Abstract
Optimization problems are ubiquitous in every field, and they are becoming more and more complex, which greatly challenges the effectiveness of existing optimization methods. To solve increasingly complicated optimization problems with high effectiveness, this paper proposes an adaptive covariance scaling estimation of distribution algorithm (ACSEDA) based on the Gaussian distribution model. Unlike traditional EDAs, which estimate the covariance and the mean vector from the same set of selected promising individuals, ACSEDA calculates the covariance from an enlarged number of promising individuals compared with those used for the mean vector. To alleviate sensitivity to the parameters of promising-individual selection, this paper further devises an adaptive promising-individual selection strategy for estimating the mean vector and an adaptive covariance scaling strategy for the covariance estimation. These two adaptive strategies dynamically adjust the associated numbers of promising individuals as evolution continues. In addition, we devise a cross-generation individual selection strategy for the parent population used to estimate the probability distribution, combining the offspring sampled in the last generation with those of the current generation. With the above mechanisms, ACSEDA is expected to balance intensification and diversification of the search to explore and exploit the solution space, and thus achieve promising performance. To verify the effectiveness of ACSEDA, extensive experiments are conducted on 30 widely used benchmark optimization problems with different dimension sizes. Experimental results demonstrate that the proposed ACSEDA is significantly superior to several state-of-the-art EDA variants and preserves good scalability in solving optimization problems. Full article
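The core idea of estimating the mean and the (scaled) covariance from selection sets of different sizes can be sketched with a diagonal Gaussian model. This is a simplification: the paper uses a full covariance and adaptive rules for `n_mean`, `n_cov`, and `scale`, which are fixed inputs here.

```python
import random

def acs_eda_step(pop, fitness, n_mean, n_cov, scale=1.0):
    """Sketch of covariance scaling in a Gaussian EDA (minimization).

    The mean is estimated from the best n_mean individuals, while the
    (diagonal) variance is estimated from a larger set of n_cov
    individuals and multiplied by a scaling factor."""
    order = sorted(range(len(pop)), key=lambda i: fitness[i])
    dim = len(pop[0])
    top_mean = [pop[i] for i in order[:n_mean]]
    top_cov = [pop[i] for i in order[:n_cov]]
    mu = [sum(x[d] for x in top_mean) / n_mean for d in range(dim)]
    var = [scale * sum((x[d] - mu[d]) ** 2 for x in top_cov) / n_cov
           for d in range(dim)]
    # sample one offspring from the scaled Gaussian model
    return [random.gauss(mu[d], var[d] ** 0.5) for d in range(dim)]
```

Estimating the variance from a larger set than the mean keeps the sampling distribution wide enough to preserve diversity even when the mean tracks only the very best individuals.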
(This article belongs to the Topic Soft Computing)

19 pages, 5107 KiB  
Article
Application of Hierarchical Agglomerative Clustering (HAC) for Systemic Classification of Pop-Up Housing (PUH) Environments
by Thomas Märzinger, Jan Kotík and Christoph Pfeifer
Appl. Sci. 2021, 11(23), 11122; https://doi.org/10.3390/app112311122 - 24 Nov 2021
Cited by 6 | Viewed by 2937
Abstract
This paper is the result of the first-phase, interdisciplinary work of a multidisciplinary research project (“Urban pop-up housing environments and their potential as local innovation systems”) involving energy engineers and waste managers, landscape architects and spatial planners, innovation researchers and technology assessors. The project aims to globally analyze and describe existing pop-up housing (PUH) and to develop modeling and assessment tools for sustainable, energy-efficient, and socially innovative temporary housing solutions (THS), especially for sustainable and resilient urban structures. The present paper demonstrates an effective application of hierarchical agglomerative clustering (HAC) to the large datasets typically derived from field studies. Although the method is well known and firmly established in (soft) computing science, it can also serve very constructively as an urban planning tool. The main aim of the underlying multidisciplinary research project was to analyze and structure THS and PUH in depth; multiple aspects must be considered when characterizing and classifying such environments. A thorough (global) web survey of PUH and an analysis of the scientific literature describing PUH and THS were performed. Moreover, among the several approaches and methods tested for classifying PUH, hierarchical clustering algorithms performed well when suitable metrics and cut-off criteria were applied. Specifically, the ‘Minkowski’ metric combined with the ‘Calinski-Harabasz’ criterion as the clustering index showed the best overall results in clustering the inhomogeneous PUH data. Several additional algorithms/functions from the field of hierarchical clustering were also tested to exploit their potential for interpreting and graphically analyzing particular structures and dependencies in the resulting clusters. Here, the significance ‘S’ and the proportion ‘P’ were found to yield the most interpretable and comprehensible results when analyzing the researched set of PUH objects (n = 85) together with their properties (n > 190). The resulting easily readable graphs clearly demonstrate the applicability and usability of hierarchical clustering and its derivative algorithms for scientifically sound building classification tasks in urban planning, effectively managing large inhomogeneous building datasets. Full article
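The pipeline the abstract describes — agglomerative clustering under a Minkowski metric, with the cut-off chosen by the Calinski-Harabasz index — can be sketched as follows. This is a minimal illustration with SciPy/scikit-learn, not the authors' implementation: the random matrix stands in for the PUH dataset (n = 85 objects; a reduced feature count is assumed here), and the linkage method and candidate cluster counts are illustrative choices.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import calinski_harabasz_score

# Hypothetical stand-in for the PUH dataset: 85 objects, 20 numeric features.
rng = np.random.default_rng(0)
X = rng.normal(size=(85, 20))

# Hierarchical agglomerative clustering with the Minkowski metric
# (with the default p = 2 this reduces to Euclidean distance).
Z = linkage(X, method="average", metric="minkowski")

# Pick the cut-off (number of clusters) that maximizes the
# Calinski-Harabasz index, as the paper uses it as a clustering criterion.
best_k, best_score = 2, -np.inf
for k in range(2, 11):
    labels = fcluster(Z, t=k, criterion="maxclust")
    score = calinski_harabasz_score(X, labels)
    if score > best_score:
        best_k, best_score = k, score
```

On real survey data, the feature matrix would first need encoding and scaling of the heterogeneous building properties; the dendrogram from `Z` is what yields the "easily readable graphs" mentioned above.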
(This article belongs to the Topic Soft Computing)
21 pages, 4611 KiB  
Article
Path Planning of a Mechanical Arm Based on an Improved Artificial Potential Field and a Rapid Expansion Random Tree Hybrid Algorithm
by Qingni Yuan, Junhui Yi, Ruitong Sun and Huan Bai
Algorithms 2021, 14(11), 321; https://doi.org/10.3390/a14110321 - 01 Nov 2021
Cited by 9 | Viewed by 2941
Abstract
To improve the path planning efficiency and obstacle avoidance ability of a robotic arm in three-dimensional space, this paper proposes an improved artificial potential field and rapid expansion random tree (APF-RRT) hybrid algorithm for mechanical arm path planning. The improved APF algorithm (I-APF) introduces a heuristic method, based on the number of adjacent obstacles, for escaping local minima, which solves the local minimum problem of the APF method and improves the search speed. The improved RRT algorithm (I-RRT) changes the nearest-neighbor selection by introducing a triangular nearest-neighbor node selection method, adopts an adaptive step size and a virtual new-node generation strategy to explore the path, and removes redundant path nodes generated by the RRT algorithm, which effectively improves the obstacle avoidance ability and efficiency of the algorithm. Bezier curves are used to fit the final generated path. Finally, an experimental analysis based on Python shows that the search time of the hybrid algorithm in a multi-obstacle environment is reduced to 2.8 s, from 37.8 s (classic RRT), 10.1 s (RRT*), and 7.4 s (P_RRT*), and both the success rate and the efficiency of the search are significantly improved. Furthermore, the hybrid algorithm is simulated in the Robot Operating System (ROS) using a UR5 mechanical arm, and the results confirm its effectiveness and reliability. Full article
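For readers unfamiliar with the APF half of the hybrid, a basic (classic, Khatib-style) artificial potential field step can be sketched as below. This is not the paper's I-APF variant — it has none of the adjacent-obstacle heuristic for escaping local minima — just the standard attractive/repulsive gradient it builds on; the gains, influence radius, and toy scene are illustrative assumptions.

```python
import numpy as np

def apf_step(q, goal, obstacles, k_att=1.0, k_rep=1.0, rho0=0.3, step=0.05):
    """One gradient-descent step of a classic artificial potential field.

    q, goal: current and goal positions (3-vectors); obstacles: list of 3-vectors.
    The attractive term pulls toward the goal; each obstacle closer than the
    influence radius rho0 adds a repulsive term pushing away from it.
    """
    force = k_att * (goal - q)                      # attractive gradient
    for obs in obstacles:
        diff = q - obs
        rho = np.linalg.norm(diff)
        if 0 < rho < rho0:
            # classic repulsive gradient, active only inside rho0
            force += k_rep * (1.0 / rho - 1.0 / rho0) / rho**3 * diff
    # take a fixed-length step along the (normalized) net force
    return q + step * force / (np.linalg.norm(force) + 1e-9)

# Toy run: steer toward the goal while skirting one obstacle near the path.
q = np.array([0.0, 0.0, 0.0])
goal = np.array([1.0, 1.0, 1.0])
obstacles = [np.array([0.5, 0.5, 0.6])]
for _ in range(200):
    q = apf_step(q, goal, obstacles)
```

If the obstacle sat exactly on the straight line to the goal, this basic scheme could stall in a local minimum — which is precisely the failure mode the paper's I-APF heuristic and the RRT fallback are designed to address.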
(This article belongs to the Topic Soft Computing)
