Numerical and Evolutionary Optimization 2021

A special issue of Mathematical and Computational Applications (ISSN 2297-8747). This special issue belongs to the section "Engineering".

Deadline for manuscript submissions: closed (31 December 2021) | Viewed by 37783

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors

Dr. Luis Gerardo de la Fraga
Computer Science Department, Cinvestav, Av. IPN 2508, Mexico City 07360, Mexico
Interests: computer vision; optimization; metaheuristics
Dr. Adriana Lara
Instituto Politécnico Nacional ESFM-IPN, Mexico City 07730, Mexico
Interests: multi-objective optimization; optimization; evolutionary computation; mathematical programming; memetic algorithms
Dr. Leonardo Trujillo
Departamento de Ingeniería en Electrónica y Eléctrica, Instituto Tecnológico de Tijuana, Calzada Tecnológico SN, Tomas Aquino, Tijuana 22414, Mexico
Interests: evolutionary computation; machine learning; data science; computer vision

Special Issue Information

Dear Colleagues, 

This Special Issue will mainly consist of selected papers presented at the 9th International Workshop on Numerical and Evolutionary Optimization (NEO 2021, see http://neo.cinvestav.mx for detailed information). However, other works that fit within the scope of the NEO workshop are also welcome. Papers considered to fit the scope of the journal and to be of sufficient quality after evaluation by the reviewers will be published free of charge.

The aim of this Special Issue is to collect papers on the intersection of numerical and evolutionary optimization. We strongly encourage the development of fast and reliable hybrid methods that maximize the strengths and minimize the weaknesses of each underlying paradigm while also being applicable to a broader class of problems. Moreover, this Special Issue aims to foster the understanding and adequate treatment of real-world problems, particularly in emerging fields that affect us all, such as healthcare, smart cities, and big data, among many others. 

Topics of interest include (but are not limited to) the following:

A) Search and Optimization:
Single- and multi-objective optimization
Mathematical programming techniques
Evolutionary algorithms
Genetic programming
Hybrid and memetic algorithms
Set-oriented numerics
Stochastic optimization
Robust optimization 

B) Real-World Problems:
Optimization, Machine Learning, and Metaheuristics applied to:
Energy production and consumption
Health monitoring systems
Computer vision and pattern recognition
Energy optimization and prediction
Modeling and control of real-world energy systems
Smart cities

Dr. Marcela Quiroz
Dr. Luis Gerardo de la Fraga
Dr. Adriana Lara
Dr. Leonardo Trujillo
Prof. Dr. Oliver Schütze
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematical and Computational Applications is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (12 papers)


Editorial


3 pages, 170 KiB  
Editorial
Numerical and Evolutionary Optimization 2021
by Marcela Quiroz-Castellanos, Luis Gerardo de la Fraga, Adriana Lara, Leonardo Trujillo and Oliver Schütze
Math. Comput. Appl. 2023, 28(3), 71; https://doi.org/10.3390/mca28030071 - 23 May 2023
Viewed by 975
Abstract
This Special Issue was inspired by the 9th International Workshop on Numerical and Evolutionary Optimization (NEO 2021) held—due to the COVID-19 pandemic—as an online-only event from 8 to 10 September 2021 [...] Full article
(This article belongs to the Special Issue Numerical and Evolutionary Optimization 2021)

Research


10 pages, 2343 KiB  
Article
Finding the Conjectured Sequence of Largest Small n-Polygons by Numerical Optimization
by János D. Pintér, Frank J. Kampas and Ignacio Castillo
Math. Comput. Appl. 2022, 27(3), 42; https://doi.org/10.3390/mca27030042 - 16 May 2022
Cited by 3 | Viewed by 1485
Abstract
LSP(n), the largest small polygon with n vertices, is a polygon with a unit diameter that has maximal area A(n). It is known that for all odd values n ≥ 3, LSP(n) is a regular n-polygon; however, this statement is not valid for even values of n. Finding the polygon LSP(n) and A(n) for even values n ≥ 6 has been a long-standing challenge. In this work, we developed high-precision numerical solution estimates of A(n) for even values n ≥ 4, using the Mathematica model development environment and the IPOPT local nonlinear optimization solver engine. First, we present a revised (tightened) LSP model that greatly assists in the efficient numerical solution of the model-class considered. This is followed by results for an illustrative sequence of even values of n, up to n = 1000. Most of the earlier research addressed special cases up to n ≤ 20, while others obtained numerical optimization results for the range 6 ≤ n ≤ 100. The results obtained were used to provide regression model-based estimates of the optimal area sequence {A(n)}, for even values n of interest, thereby essentially solving the LSP model-class numerically, with demonstrably high precision. Full article
(This article belongs to the Special Issue Numerical and Evolutionary Optimization 2021)
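The sketch below is not taken from the paper (which uses a tightened model in Mathematica with IPOPT); it only illustrates the underlying optimization problem for a small even n, maximizing the polygon area subject to a unit-diameter constraint with SciPy's local SLSQP solver. The starting point, perturbation, and solver settings are illustrative assumptions.

```python
# Minimal sketch, not the authors' Mathematica/IPOPT model: approximate LSP(n) for a
# small even n by maximizing the shoelace area of n free vertices subject to all
# pairwise distances being at most 1, using SciPy's local SLSQP solver.
import numpy as np
from scipy.optimize import minimize

n = 6  # number of vertices (even case)

def area(flat):
    """Shoelace area; assumes the polygon stays simple (non-self-intersecting)."""
    p = flat.reshape(n, 2)
    x, y = p[:, 0], p[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def diameter_constraints(flat):
    """One inequality (>= 0) per vertex pair: 1 - squared distance."""
    p = flat.reshape(n, 2)
    return np.array([1.0 - np.sum((p[i] - p[j]) ** 2)
                     for i in range(n) for j in range(i + 1, n)])

# Start from a slightly perturbed regular n-gon with diameter just below 1.
rng = np.random.default_rng(0)
theta = 2 * np.pi * np.arange(n) / n
x0 = 0.49 * np.column_stack((np.cos(theta), np.sin(theta)))
x0 = (x0 + rng.normal(0, 0.01, x0.shape)).ravel()

res = minimize(lambda v: -area(v), x0, method="SLSQP",
               constraints=[{"type": "ineq", "fun": diameter_constraints}],
               options={"maxiter": 1000, "ftol": 1e-12})

print(f"n = {n}, approximate area = {area(res.x):.6f}")
# The conjectured optimum for n = 6 is about 0.674981 (the regular hexagon gives only
# ~0.649519); a local solver like this may need several random restarts to reach it.
```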

25 pages, 51555 KiB  
Article
Applications of ANFIS-Type Methods in Simulation of Systems in Marine Environments
by Aakanksha Jain, Iman Bahreini Toussi, Abdolmajid Mohammadian, Hossein Bonakdari and Majid Sartaj
Math. Comput. Appl. 2022, 27(2), 29; https://doi.org/10.3390/mca27020029 - 21 Mar 2022
Cited by 3 | Viewed by 1944
Abstract
ANFIS-type algorithms have been used in various modeling and simulation problems. With more accurate and adaptable algorithms, it is possible to obtain models that better emulate real-life behavior. A critical environmental problem is the discharge of saline industrial effluents in the form of buoyant jets into water bodies. Given the potentially harmful effects of the effluents discharged from desalination plants on the marine environment and the coastal ecosystem, minimizing these effects is crucial. Hence, it is important to design the outfall system properly to reduce these impacts. To the best of the authors' knowledge, no previous study has used AI methods to formulate the effluent discharge and find an optimum numerical model under the conditions considered here. In this study, submerged discharges, specifically negatively buoyant jets, are modeled. The objective of this study is to compare various artificial intelligence algorithms along with multivariate regression models to find the best-fit model emulating effluent discharge and to identify the model with the lowest computational time. This is achieved by training and testing the Adaptive Neuro-Fuzzy Inference System (ANFIS), ANFIS-Genetic Algorithm (GA), ANFIS-Particle Swarm Optimization (PSO) and ANFIS-Firefly Algorithm (FFA) models with input parameters, which are obtained by using the realizable k-ε turbulence model, and simulated parameters, which are obtained after modeling the turbulent jet using the OpenFOAM simulation platform. A comparison of the realizable k-ε turbulence model outputs and the AI algorithms' outputs is conducted in this study. Statistical parameters such as least error, coefficient of determination (R2), Mean Absolute Error (MAE), and Average Absolute Deviation (AED) are measured to evaluate the performance of the models. In this work, it is found that ANFIS-PSO performs better compared to the other four models and the multivariate regression model. It is shown that this model provides better R2, MAE, and AED; however, the non-hybrid ANFIS model provides reasonably acceptable results at lower computational cost. The results of the study demonstrate an error of 6.908% as the best-case scenario in the AI models. Full article
(This article belongs to the Special Issue Numerical and Evolutionary Optimization 2021)
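The sketch below is not the paper's ANFIS-PSO model; it only illustrates the role PSO plays in such a hybrid, namely tuning model parameters by minimizing a prediction error. The toy curve-fitting data and all PSO settings (swarm size, inertia, acceleration coefficients) are assumptions.

```python
# Minimal sketch of the PSO component only (no ANFIS here): a plain particle swarm
# tunes the parameters of a toy model by minimizing RMSE, the same role PSO plays
# when it tunes ANFIS membership/consequent parameters.
import numpy as np

rng = np.random.default_rng(42)

# Toy data from y = a*exp(-b*x) + c with noise (placeholder for the jet data).
x_data = np.linspace(0, 5, 80)
y_data = 2.0 * np.exp(-0.7 * x_data) + 0.3 + rng.normal(0, 0.02, x_data.size)

def rmse(params):
    a, b, c = params
    pred = a * np.exp(-b * x_data) + c
    return np.sqrt(np.mean((pred - y_data) ** 2))

# Standard global-best PSO with inertia and cognitive/social terms.
n_particles, n_iter, dim = 30, 200, 3
lo, hi = np.array([0.0, 0.0, -1.0]), np.array([5.0, 3.0, 1.0])
pos = rng.uniform(lo, hi, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([rmse(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

w, c1, c2 = 0.7, 1.5, 1.5
for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([rmse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best params:", np.round(gbest, 3), "RMSE:", round(float(pbest_val.min()), 4))
```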

18 pages, 1136 KiB  
Article
Evaluation of Machine Learning Algorithms for Early Diagnosis of Deep Venous Thrombosis
by Eduardo Enrique Contreras-Luján, Enrique Efrén García-Guerrero, Oscar Roberto López-Bonilla, Esteban Tlelo-Cuautle, Didier López-Mancilla and Everardo Inzunza-González
Math. Comput. Appl. 2022, 27(2), 24; https://doi.org/10.3390/mca27020024 - 04 Mar 2022
Cited by 4 | Viewed by 4047
Abstract
Deep venous thrombosis (DVT) is a disease that must be diagnosed quickly, as it can trigger the death of patients. Nowadays, one can find different ways to determine it, including clinical scoring, D-dimer, ultrasonography, etc. Recently, scientists have focused efforts on using machine learning (ML) and neural networks for disease diagnosis, progressively increasing the accuracy and efficacy. Patients with suspected DVT often present no apparent symptoms, so pattern recognition techniques and well-trained ML models can aid timely diagnosis and support sound clinical decisions. The aim of this paper is to propose several ML models for more efficient and reliable DVT diagnosis through their implementation on an edge device, enabling the development of instruments that are smart, portable, reliable, and cost-effective. The dataset was obtained from a state-of-the-art article. It is divided into 85% for training and cross-validation and 15% for testing. The input data in this study are the Wells criteria, the patient's age, and the patient's gender. The output data correspond to the patient's diagnosis. This study includes the evaluation of several classifiers such as Decision Trees (DT), Extra Trees (ET), K-Nearest Neighbor (KNN), Multi-Layer Perceptron Neural Network (MLP-NN), Random Forest (RF), and Support Vector Machine (SVM). Finally, the implementation of these ML models on a high-performance embedded system is proposed to develop an intelligent system for early DVT diagnosis that is reliable, portable, open source, and low cost. Among the evaluated ML algorithms, KNN achieved the highest accuracy (90.4%) and specificity (80.66%) when implemented on both a personal computer (PC) and a Raspberry Pi 4 (RPi4). The accuracy of all trained models on PC and Raspberry Pi 4 is greater than 85%, while the area under the curve (AUC) values are between 0.81 and 0.86. In conclusion, compared with traditional methods, the best ML classifiers are effective at predicting DVT early and efficiently. Full article
(This article belongs to the Special Issue Numerical and Evolutionary Optimization 2021)
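As an illustration only (the paper's dataset is not reproduced here), the sketch below trains a few of the listed classifiers on a synthetic stand-in with the same kind of inputs (Wells score, age, gender) and the 85/15 split described in the abstract; the feature ranges and the synthetic label model are assumptions.

```python
# Illustrative sketch: synthetic placeholder data stands in for the paper's dataset.
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
n = 400
wells = rng.integers(-2, 9, n)          # Wells criteria score (hypothetical range)
age = rng.integers(18, 90, n)
gender = rng.integers(0, 2, n)
X = np.column_stack([wells, age, gender]).astype(float)
# Synthetic label: higher Wells score and age raise the DVT probability.
p = 1 / (1 + np.exp(-(0.6 * wells + 0.02 * (age - 50) - 1.5)))
y = (rng.random(n) < p).astype(int)

# 85% training / cross-validation, 15% testing, as described in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.15, random_state=1)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "RF": RandomForestClassifier(n_estimators=200, random_state=1),
    "SVM": SVC(probability=True, random_state=1),
}
for name, model in models.items():
    cv = cross_val_score(model, X_tr, y_tr, cv=5).mean()
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: cv={cv:.3f} test_acc={acc:.3f} auc={auc:.3f}")
```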

18 pages, 580 KiB  
Article
Variable Decomposition for Large-Scale Constrained Optimization Problems Using a Grouping Genetic Algorithm
by Guadalupe Carmona-Arroyo, Marcela Quiroz-Castellanos and Efrén Mezura-Montes
Math. Comput. Appl. 2022, 27(2), 23; https://doi.org/10.3390/mca27020023 - 03 Mar 2022
Cited by 2 | Viewed by 2137
Abstract
Several real-world optimization problems are very difficult, and their optimal solutions cannot be found with traditional methods. Moreover, for some of these problems, the large number of decision variables is a major contributing factor to their complexity; they are known as Large-Scale Optimization Problems, and various strategies have been proposed to deal with them. One of the most popular tools is called Cooperative Co-Evolution, which works through a decomposition of the decision variables into smaller subproblems or variable subgroups, which are optimized separately and cooperate to finally create a complete solution of the original problem. This kind of decomposition can be handled as a combinatorial optimization problem where we want to group variables that interact with each other. In this work, we propose a Grouping Genetic Algorithm to optimize the variable decomposition by reducing the interaction between subgroups. Although the Cooperative Co-Evolution approach is widely used to deal with unconstrained optimization problems, there are few works related to constrained problems. Therefore, our experiments were performed on a test benchmark of 18 constrained functions under 100, 500, and 1000 variables. The results obtained indicate that a Grouping Genetic Algorithm is an appropriate tool to optimize the variable decomposition for Large-Scale Constrained Optimization Problems, outperforming the decomposition obtained by a state-of-the-art genetic algorithm. Full article
(This article belongs to the Special Issue Numerical and Evolutionary Optimization 2021)
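The sketch below is not the paper's Grouping Genetic Algorithm; it only shows a common, differential-grouping-style way to detect whether two decision variables interact, which is the information a variable decomposition tries to exploit. The toy objective, bounds, and perturbation step are assumptions.

```python
# Minimal interaction check between decision variables (differential-grouping style),
# i.e., the structure a decomposition method tries to respect when forming subgroups.
import numpy as np

def f(x):
    # Toy objective: x0 and x1 interact through a product term; x2 is separable.
    return x[0] * x[1] + x[2] ** 2

def delta(func, x, i, step):
    """Change in the objective when only variable i is perturbed by 'step'."""
    xp = x.copy()
    xp[i] += step
    return func(xp) - func(x)

def interacts(func, dim, i, j, lb=-5.0, ub=5.0, step=1.0, eps=1e-9):
    """i and j interact if the effect of perturbing x_i depends on the value of x_j."""
    base = np.full(dim, lb)
    moved = base.copy()
    moved[j] = ub
    return abs(delta(func, base, i, step) - delta(func, moved, i, step)) > eps

dim = 3
for i in range(dim):
    for j in range(i + 1, dim):
        print(f"x{i} and x{j} interact: {interacts(f, dim, i, j)}")
# Expected output: (x0, x1) -> True, (x0, x2) -> False, (x1, x2) -> False.
```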

20 pages, 28880 KiB  
Article
Attention Measurement of an Autism Spectrum Disorder User Using EEG Signals: A Case Study
by José Jaime Esqueda-Elizondo, Reyes Juárez-Ramírez, Oscar Roberto López-Bonilla, Enrique Efrén García-Guerrero, Gilberto Manuel Galindo-Aldana, Laura Jiménez-Beristáin, Alejandra Serrano-Trujillo, Esteban Tlelo-Cuautle and Everardo Inzunza-González
Math. Comput. Appl. 2022, 27(2), 21; https://doi.org/10.3390/mca27020021 - 02 Mar 2022
Cited by 12 | Viewed by 5279
Abstract
Autism Spectrum Disorder (ASD) is a neurodevelopmental life condition characterized by problems with social interaction, low verbal and non-verbal communication skills, and repetitive and restricted behavior. People with ASD usually have variable attention levels because they are hypersensitive and large amounts of environmental information overwhelm them. Attention is a process that occurs at the cognitive level and allows us to orient ourselves towards relevant stimuli, ignoring those that are not, and to act accordingly. This paper presents a methodology based on electroencephalographic (EEG) signals for attention measurement in a 13-year-old boy diagnosed with ASD. The EEG signals are acquired with an Epoc+ Brain–Computer Interface (BCI) via the Emotiv Pro platform while the subject performs several learning activities, and Matlab 2019a is used for signal processing. For this article, we propose to use electrodes F3, F4, P7, and P8. Then, we calculate the band power spectral density to detect the Theta Relative Power (TRP), Alpha Relative Power (ARP), Beta Relative Power (BRP), Theta–Beta Ratio (TBR), Theta–Alpha Ratio (TAR), and Theta/(Alpha+Beta), which are features related to attention detection and neurofeedback. We train and evaluate several machine learning (ML) models with these features. In this study, the multi-layer perceptron neural network model (MLP-NN) has the best performance, with an AUC of 0.9299, Cohen's Kappa coefficient of 0.8597, Matthews correlation coefficient of 0.8602, and Hamming loss of 0.0701. These findings make it possible to develop better learning scenarios according to the needs of the person with ASD. Moreover, they make it possible to obtain quantifiable information on their progress to reinforce the perception of the teacher or therapist. Full article
(This article belongs to the Special Issue Numerical and Evolutionary Optimization 2021)
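A minimal sketch of the feature-extraction step only, not the authors' Emotiv/Matlab pipeline: the relative band powers and theta-based ratios listed above are computed from a Welch power spectral density of one (here synthetic) EEG channel. The 128 Hz sampling rate and band limits are assumptions.

```python
# Relative band powers and attention-related ratios from a Welch PSD of one channel.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 128.0                                  # assumed sampling rate (Hz)
rng = np.random.default_rng(3)
eeg = rng.normal(0, 10, int(60 * fs))       # placeholder for one channel (e.g., F3), 60 s

freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))

def band_power(lo, hi):
    """Integrate the PSD over a frequency band [lo, hi) in Hz."""
    mask = (freqs >= lo) & (freqs < hi)
    return trapezoid(psd[mask], freqs[mask])

theta = band_power(4, 8)
alpha = band_power(8, 13)
beta = band_power(13, 30)
total = band_power(1, 30)

trp, arp, brp = theta / total, alpha / total, beta / total   # relative powers
tbr, tar = theta / beta, theta / alpha                       # ratios used for attention
tab = theta / (alpha + beta)

print(f"TRP={trp:.3f} ARP={arp:.3f} BRP={brp:.3f} "
      f"TBR={tbr:.2f} TAR={tar:.2f} T/(A+B)={tab:.2f}")
```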

25 pages, 4188 KiB  
Article
AutoML for Feature Selection and Model Tuning Applied to Fault Severity Diagnosis in Spur Gearboxes
by Mariela Cerrada, Leonardo Trujillo, Daniel E. Hernández, Horacio A. Correa Zevallos, Jean Carlo Macancela, Diego Cabrera and René Vinicio Sánchez
Math. Comput. Appl. 2022, 27(1), 6; https://doi.org/10.3390/mca27010006 - 13 Jan 2022
Cited by 13 | Viewed by 3687
Abstract
Gearboxes are widely used in industrial processes as mechanical power transmission systems. Consequently, gearbox failures can affect other parts of the system and produce economic losses. The early detection of possible failure modes and the assessment of their severity in such devices is an important field of research. Data-driven approaches usually require the exhaustive development of pipelines, including model parameter optimization and feature selection. This paper takes advantage of recent Auto Machine Learning (AutoML) tools to propose proper feature and model selection for three failure modes under different severity levels: broken tooth, pitting and crack. The performance of 64 statistical condition indicators (SCI) extracted from vibration signals under the three failure modes was analyzed by two AutoML systems, namely the H2O Driverless AI platform and TPOT, both of which include feature engineering and feature selection mechanisms. In both cases, the systems converged to different types of decision tree methods, with ensembles of XGBoost models preferred by H2O, while TPOT generated different types of stacked models. The models produced by both systems achieved very high, and practically equivalent, performances on all problems. Both AutoML systems converged to pipelines that focus on very similar subsets of features across all problems, indicating that several problems in this domain can be solved by a rather small set of 10 common features, with accuracy up to 90%. This latter result is important for research on useful feature selection for gearbox fault diagnosis. Full article
(This article belongs to the Special Issue Numerical and Evolutionary Optimization 2021)
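The sketch below covers only the TPOT side (H2O Driverless AI is a commercial platform) and assumes the classic TPOT API; the gearbox SCI features are replaced by a synthetic 64-column matrix and hypothetical severity labels, and the small evolutionary budget is chosen just so the example runs quickly.

```python
# Sketch of an AutoML run with TPOT on a synthetic stand-in for the 64 SCI features.
import numpy as np
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier  # classic TPOT API assumed (pip install tpot)

rng = np.random.default_rng(7)
n_samples, n_sci = 600, 64                 # 64 statistical condition indicators
X = rng.normal(size=(n_samples, n_sci))
# Hypothetical severity labels (e.g., 0 = healthy ... 3 = worst level).
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int) + 2 * (X[:, 2] > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Small evolutionary budget so the sketch runs quickly; real studies use far more.
automl = TPOTClassifier(generations=5, population_size=20, cv=5,
                        random_state=0, verbosity=2, n_jobs=-1)
automl.fit(X_tr, y_tr)
print("held-out accuracy:", automl.score(X_te, y_te))
automl.export("gearbox_pipeline.py")   # writes the best pipeline as scikit-learn code
```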

13 pages, 520 KiB  
Article
Analysis and Detection of Erosion in Wind Turbine Blades
by Josué Enríquez Zárate, María de los Ángeles Gómez López, Javier Alberto Carmona Troyo and Leonardo Trujillo
Math. Comput. Appl. 2022, 27(1), 5; https://doi.org/10.3390/mca27010005 - 13 Jan 2022
Cited by 5 | Viewed by 3122
Abstract
This paper studies erosion at the tip of wind turbine blades by considering aerodynamic analysis, modal analysis and predictive machine learning modeling. Erosion can be caused by several factors and can affect different parts of the blade, reducing its dynamic performance and useful life. The ability to detect and quantify erosion on a blade is an important predictive maintenance task for wind turbines that can have broad repercussions in terms of avoiding serious damage, improving power efficiency and reducing downtimes. This study considers both sides of the leading edge of the blade (top and bottom), evaluating the mechanical imbalance caused by the material loss that induces variations of the power coefficient, resulting in a loss in efficiency. The QBlade software is used in our analysis, and load calculations are performed by using blade element momentum theory. Numerical results show the performance of a blade based on the relationship between mechanical damage and aerodynamic behavior, which are then validated on a physical model. Moreover, two machine learning (ML) problems are posed to automatically detect the location of erosion (top of the edge, bottom or both) and to determine the erosion level (from 8% to 18%) present in the blade. The first problem is solved using classification models, while the second is solved using ML regression. The ML pipelines are designed automatically by an AutoML system with little human intervention, achieving highly accurate results. This work makes several contributions by developing ML models to detect both the presence and location of erosion on a blade, estimating its level, and applying AutoML for the first time in this domain. Full article
(This article belongs to the Special Issue Numerical and Evolutionary Optimization 2021)
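Illustrative only: the QBlade-derived features and measured erosion data are not reproduced, so synthetic stand-ins are used. The sketch mirrors the two problems posed in the abstract, classifying the erosion location and regressing the erosion level, with scikit-learn models chosen as placeholders rather than the paper's AutoML pipelines.

```python
# Two-problem sketch: classify erosion location and regress erosion level (synthetic data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.metrics import accuracy_score, mean_absolute_error

rng = np.random.default_rng(11)
n = 500
X = rng.normal(size=(n, 12))                                  # placeholder blade-response features
location = (X[:, 0] > 0).astype(int) + (X[:, 1] > 0.5)        # 0 = top, 1 = bottom, 2 = both (hypothetical)
level = np.clip(13 + 3 * X[:, 2] + rng.normal(0, 1, n), 8, 18)  # erosion level in percent

Xc_tr, Xc_te, yc_tr, yc_te = train_test_split(X, location, random_state=0)
Xr_tr, Xr_te, yr_tr, yr_te = train_test_split(X, level, random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(Xc_tr, yc_tr)
reg = RandomForestRegressor(n_estimators=300, random_state=0).fit(Xr_tr, yr_tr)

print("location accuracy:", round(accuracy_score(yc_te, clf.predict(Xc_te)), 3))
print("erosion-level MAE (%):", round(mean_absolute_error(yr_te, reg.predict(Xr_te)), 3))
```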

15 pages, 1413 KiB  
Article
Time to Critical Condition in Emergency Services
by Pedro A. Pury
Math. Comput. Appl. 2021, 26(4), 70; https://doi.org/10.3390/mca26040070 - 30 Sep 2021
Cited by 1 | Viewed by 2371
Abstract
Providing uninterrupted response service is of paramount importance for emergency medical services, regardless of the operating scenario. Thus, reliable estimates of the time to the critical condition, under which there will be no available servers to respond to the next incoming call, become very useful measures of the system’s performance. In this contribution, we develop a key performance indicator by providing an explicit formula for the average time to the shortage condition. Our analytical expression for this average time is a function of the number of parallel servers and the inter-arrival and service times. We assume exponential distributions of times in our analytical expression, but for evaluating the mean first-passage time to the critical condition under more realistic scenarios, we validate our result through exhaustive simulations with lognormal service time distributions. For this task, we have implemented a simulator in R. Our results indicate that our analytical formula is an acceptable approximation under any situation of practical interest. Full article
(This article belongs to the Special Issue Numerical and Evolutionary Optimization 2021)
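The paper's closed-form expression is not reproduced here; the sketch below only estimates the same quantity by Monte Carlo simulation, taking the critical condition to be the first arrival that finds all servers busy, with Poisson arrivals and lognormal service times as in the paper's validation runs. The arrival rate, number of servers, and lognormal parameters are assumptions.

```python
# Monte Carlo estimate of the mean first time an incoming call finds all c servers busy.
import numpy as np

def time_to_critical(c, arrival_rate, ln_mean, ln_sigma, rng, horizon=1e6):
    """First arrival epoch at which no server is available (one reading of 'critical')."""
    free_at = np.zeros(c)          # time at which each server becomes free
    t = 0.0
    while t < horizon:
        t += rng.exponential(1.0 / arrival_rate)       # next incoming call
        idle = np.flatnonzero(free_at <= t)
        if idle.size == 0:
            return t                                   # shortage: no available server
        service = rng.lognormal(mean=ln_mean, sigma=ln_sigma)
        free_at[idle[0]] = t + service
    return horizon

rng = np.random.default_rng(2024)
samples = [time_to_critical(c=4, arrival_rate=1.0, ln_mean=0.0, ln_sigma=0.5, rng=rng)
           for _ in range(2000)]
print("estimated mean time to critical condition:", round(float(np.mean(samples)), 2))
```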

27 pages, 8314 KiB  
Article
A Hybrid Estimation of Distribution Algorithm for the Quay Crane Scheduling Problem
by Ricardo Pérez-Rodríguez
Math. Comput. Appl. 2021, 26(3), 64; https://doi.org/10.3390/mca26030064 - 10 Sep 2021
Cited by 4 | Viewed by 2168
Abstract
The aim of the quay crane scheduling problem (QCSP) is to identify the best sequence of discharging and loading operations for a set of quay cranes. This problem is solved with a new hybrid estimation of distribution algorithm (EDA). The approach is proposed to tackle the drawbacks of EDAs, i.e., the lack of diversity of solutions and poor exploitation ability. The hybridization used in this investigation combines a distance-based ranking model with the moth-flame algorithm. The distance-based ranking model is in charge of modelling the distribution of the solution space through an exponential function of the distance between solutions, while the moth-flame heuristic determines the offspring by means of a spiral function that identifies new locations for the new solutions. Based on the results, the proposed scheme, called QCEDA, enhances the performance of other EDAs that use complex probability models. The dispersion of the QCEDA results is lower than that of the other algorithms used in the comparison, meaning that the solutions found by QCEDA are more concentrated around the best value, i.e., the average of the QCEDA solutions converges more closely to the best found value than that of the other approaches. In conclusion, hybrid EDAs perform as well as or better than so-called pure EDAs. Full article
(This article belongs to the Special Issue Numerical and Evolutionary Optimization 2021)
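The sketch below shows only the moth-flame component, applied to a continuous toy function; the paper embeds this spiral update inside a permutation-based EDA for crane schedules, which is not reproduced here. The population size, spiral constant b, and flame-reduction schedule follow the standard moth-flame description and are otherwise assumptions.

```python
# Standard moth-flame spiral update on a continuous toy problem (sphere function).
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

rng = np.random.default_rng(5)
n_moths, dim, max_iter, b = 30, 10, 300, 1.0
lo, hi = -5.0, 5.0
moths = rng.uniform(lo, hi, (n_moths, dim))

flames, flame_fit = None, None
for it in range(max_iter):
    fit = np.array([sphere(m) for m in moths])
    # Flames = best solutions found so far (previous flames and current moths combined).
    pool = moths if flames is None else np.vstack([flames, moths])
    pool_fit = fit if flame_fit is None else np.concatenate([flame_fit, fit])
    order = np.argsort(pool_fit)[:n_moths]
    flames, flame_fit = pool[order], pool_fit[order]

    # The number of flames shrinks linearly so late iterations exploit the best one.
    n_flames = int(round(n_moths - it * (n_moths - 1) / max_iter))
    a = -1.0 - it / max_iter                    # t is drawn from [a, 1], a goes -1 -> -2
    for i in range(n_moths):
        j = min(i, n_flames - 1)                # surplus moths spiral around the last flame
        d = np.abs(flames[j] - moths[i])
        t = (1.0 - a) * rng.random(dim) + a     # uniform in [a, 1]
        moths[i] = np.clip(d * np.exp(b * t) * np.cos(2 * np.pi * t) + flames[j], lo, hi)

print("best value found:", round(float(flame_fit[0]), 6))
```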

17 pages, 2903 KiB  
Article
Preserving Geo-Indistinguishability of the Emergency Scene to Predict Ambulance Response Time
by Héber H. Arcolezi, Selene Cerna, Christophe Guyeux and Jean-François Couchot
Math. Comput. Appl. 2021, 26(3), 56; https://doi.org/10.3390/mca26030056 - 04 Aug 2021
Cited by 6 | Viewed by 4005
Abstract
Emergency medical services (EMS) provide crucial emergency assistance and ambulatory services. One key measurement of EMS's quality of service is their ambulances' response time (ART), which generally refers to the period between EMS notification and the moment an ambulance arrives on the scene. Because many victims (e.g., those in cardiac arrest) require care within an adequate time, improving ARTs is vital. This paper proposes to predict ARTs using machine-learning (ML) techniques, which could be used as a decision-support system by EMS to allow a dynamic selection of ambulance dispatch centers. However, one well-known predictor of ART is the location of the emergency (e.g., whether it is in an urban or rural area), which is sensitive data because it can reveal who received care and for which reason. Thus, we considered the 'input perturbation' setting in the privacy-preserving ML literature, which allows EMS to sanitize each location datum independently; hence, ML models are trained only with sanitized data. In this paper, geo-indistinguishability, a state-of-the-art formal notion based on differential privacy, was applied to sanitize each emergency location. To validate our proposals, we used retrospective data of an EMS in France, namely the Departmental Fire and Rescue Service of Doubs, and publicly available data (e.g., weather and traffic data). As shown in the results, the sanitization of location data and the perturbation of its associated features (e.g., city, distance) had no considerable impact on predicting ARTs. With these findings, EMSs may prefer using and/or sharing sanitized datasets to avoid possible data leakages, membership inference attacks, or data reconstructions, for example. Full article
(This article belongs to the Special Issue Numerical and Evolutionary Optimization 2021)
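A minimal sketch of the planar Laplace mechanism that underlies geo-indistinguishability (not the paper's full prediction pipeline): each scene location is perturbed with polar Laplace noise. The projected coordinates, the 500 m reference radius, and the privacy budget are hypothetical, and the coordinates are assumed to be planar (e.g., meters in a local projection) rather than raw latitude/longitude.

```python
# Planar Laplace mechanism (Andrés et al.) used by geo-indistinguishability.
import numpy as np
from scipy.special import lambertw

def sanitize_location(x, y, epsilon, rng):
    """Add polar Laplace noise so the true scene location is epsilon-geo-indistinguishable."""
    theta = rng.uniform(0.0, 2.0 * np.pi)                 # random direction
    p = rng.uniform(0.0, 1.0)
    # Inverse CDF of the noise radius uses the -1 branch of the Lambert W function.
    r = -(1.0 / epsilon) * (np.real(lambertw((p - 1.0) / np.e, k=-1)) + 1.0)
    return x + r * np.cos(theta), y + r * np.sin(theta)

rng = np.random.default_rng(0)
true_x, true_y = 930_000.0, 6_700_000.0      # hypothetical projected coordinates (meters)
epsilon = np.log(2) / 500.0                  # privacy level of ln(2) within a 500 m radius
for _ in range(3):
    nx, ny = sanitize_location(true_x, true_y, epsilon, rng)
    print(f"sanitized scene: ({nx:.1f}, {ny:.1f})")
```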

10 pages, 473 KiB  
Article
Solving a Real-Life Distributor’s Pallet Loading Problem
by Mauro Dell’Amico and Matteo Magnani
Math. Comput. Appl. 2021, 26(3), 53; https://doi.org/10.3390/mca26030053 - 19 Jul 2021
Cited by 4 | Viewed by 3170
Abstract
We consider the distributor's pallet loading problem, where a set of different boxes is packed on the smallest number of pallets while satisfying a given set of constraints. In particular, we refer to a real-life environment where each pallet is loaded with a set of layers made of boxes, and both a stability constraint and a compression constraint must be respected. The stability requirement imposes the following: (a) to load at level k+1 a layer with total area (i.e., the sum of the bottom faces' area of the boxes present in the layer) not exceeding α times the area of the layer of level k (where α ≤ 1), and (b) to limit with a given threshold the difference between the highest and the lowest box of a layer. The compression constraint defines the maximum weight that each layer k can sustain; hence, the total weight of the layers loaded over k must not exceed that value. Some stability and compression constraints are considered in other works, but to our knowledge, none are defined as they are faced in a real-life problem. We present a matheuristic approach which works in two phases. In the first, a number of layers are defined using classical 2D bin packing algorithms, applied to a smart selection of boxes. In the second phase, the layers are packed on the minimum number of pallets by means of a specialized MILP model solved with Gurobi. Computational experiments on real-life instances are used to assess the effectiveness of the algorithm. Full article
(This article belongs to the Special Issue Numerical and Evolutionary Optimization 2021)
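The sketch below is a simplified stand-in for the second phase only: it assigns already-built layers to as few pallets as possible with a small Gurobi model that keeps just a height and a weight capacity per pallet; the paper's full MILP also encodes the stability and compression constraints, which are omitted here. The layer data and capacities are hypothetical.

```python
# Simplified second-phase model: pack given layers onto the minimum number of pallets.
import gurobipy as gp
from gurobipy import GRB

# Hypothetical layers produced by the first (2D bin packing) phase: (height cm, weight kg).
layers = [(18, 120), (15, 95), (22, 140), (12, 60), (20, 110), (16, 80)]
H_MAX, W_MAX = 60, 400            # assumed pallet height and weight capacities
n_layers = len(layers)
n_pallets = n_layers              # trivial upper bound: one pallet per layer

m = gp.Model("layer_to_pallet")
x = m.addVars(n_layers, n_pallets, vtype=GRB.BINARY, name="x")   # layer l on pallet p
y = m.addVars(n_pallets, vtype=GRB.BINARY, name="y")             # pallet p is used

m.addConstrs((x.sum(l, "*") == 1 for l in range(n_layers)), name="assign_once")
m.addConstrs((gp.quicksum(layers[l][0] * x[l, p] for l in range(n_layers)) <= H_MAX * y[p]
              for p in range(n_pallets)), name="height")
m.addConstrs((gp.quicksum(layers[l][1] * x[l, p] for l in range(n_layers)) <= W_MAX * y[p]
              for p in range(n_pallets)), name="weight")
m.setObjective(y.sum(), GRB.MINIMIZE)
m.optimize()

for p in range(n_pallets):
    if y[p].X > 0.5:
        content = [l for l in range(n_layers) if x[l, p].X > 0.5]
        print(f"pallet {p}: layers {content}")
```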
