Algorithms in Decision Support Systems

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Databases and Data Structures".

Deadline for manuscript submissions: closed (31 October 2020) | Viewed by 35461

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editor


Dr. Vicente García-Díaz
Guest Editor
Department of Computer Science, University of Oviedo, 33007 Oviedo, Spain
Interests: domain-specific languages; model-driven engineering; business process management; machine learning; Internet of Things and e-learning

Special Issue Information

Dear Colleagues,

Decision support systems (DSSs) are increasingly important information systems that help to make decisions on unstructured and semi-structured decision problems that have no simple solution from a human point of view. They are currently used in many areas, such as medical diagnosis, catastrophe avoidance, agriculture, sustainable development, sales projections, inventory organization, production design, etc. The architecture of a typical DSS comprises three main components: (1) a knowledge base; (2) a user interface; and (3) a model to infer the decisions. Such models may be based on many types of algorithms, such as neural networks, logistic regression, classification trees, fuzzy logic, etc. (a minimal sketch of this three-component architecture is given after this letter). Although many works have tried to optimize the operation of DSSs, researchers continue to improve their performance by refining existing algorithms and proposing new ones, normally adapted to the data available in a particular domain of knowledge. The aim of this Special Issue is therefore to significantly advance the state of the art in this area, improving the performance of DSSs in specific domains. We encourage authors across the world to submit their original and unpublished works. We have a special interest in works focusing on the topics listed below, but we are open to other works that fit the theme of the Special Issue.

Dr. Vicente García-Díaz
Guest Editor
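
To make the three-component DSS architecture mentioned above concrete, here is a minimal Python sketch that wires together a knowledge base, an inference model, and a trivial user interface. It is an illustrative toy under assumed names and rules (the triage thresholds, class names, and decisions are hypothetical), not a reference implementation of any particular DSS.

```python
# A toy DSS skeleton: knowledge base + inference model + user interface.
# Illustrative only; all names, thresholds, and rules are hypothetical examples.
from dataclasses import dataclass, field


@dataclass
class KnowledgeBase:
    """Stores domain facts/rules; here, thresholds for a hypothetical triage task."""
    rules: dict = field(default_factory=lambda: {"fever_c": 38.0, "min_spo2": 92.0})


class InferenceModel:
    """Infers a decision from the knowledge base and the observed inputs."""

    def __init__(self, kb: KnowledgeBase):
        self.kb = kb

    def decide(self, temperature_c: float, spo2: float) -> str:
        # A hand-written rule model; real DSSs may plug in neural networks,
        # logistic regression, classification trees, fuzzy logic, etc.
        if spo2 < self.kb.rules["min_spo2"]:
            return "urgent referral"
        if temperature_c >= self.kb.rules["fever_c"]:
            return "schedule examination"
        return "no action needed"


class UserInterface:
    """Minimal front end that gathers inputs and reports the decision."""

    def __init__(self, model: InferenceModel):
        self.model = model

    def run(self, temperature_c: float, spo2: float) -> None:
        print(f"Recommendation: {self.model.decide(temperature_c, spo2)}")


if __name__ == "__main__":
    UserInterface(InferenceModel(KnowledgeBase())).run(temperature_c=38.4, spo2=96.0)
```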

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Atmospheric models
  • Data mining algorithms
  • Deep learning algorithms
  • Evolutionary algorithms
  • Fuzzy logic
  • Genetic algorithms
  • Machine learning algorithms
  • Probabilistic reasoning
  • Rule-based algorithms
  • Statistical methods


Published Papers (8 papers)


Research

18 pages, 970 KiB  
Article
Design Limitations, Errors and Hazards in Creating Decision Support Platforms with Large- and Very Large-Scale Data and Program Cores
by Elias Koukoutsis, Constantin Papaodysseus, George Tsavdaridis, Nikolaos V. Karadimas, Athanasios Ballis, Eirini Mamatsi and Athanasios Rafail Mamatsis
Algorithms 2020, 13(12), 341; https://doi.org/10.3390/a13120341 - 14 Dec 2020
Cited by 1 | Viewed by 2717
Abstract
Recently, very large-scale decision support systems (DSSs) have been developed that tackle highly complex problems associated with extensive and polymorphic information, which is often geographically dispersed. The management, updating, modification and upgrading of the data and program core of such an information system is, as a rule, a very difficult task that encompasses many hazards and risks. The purpose of the present work was (a) to list the more significant of these hazards and risks and (b) to introduce a new general methodology for designing decision support (DS) systems that are robust and circumvent these risks. The core of this new approach was the introduction of a meta-database, called teleological, on the basis of which management, updating, modification, reduction, growth and upgrading of the system may be safely and efficiently achieved. The very same teleological meta-database can be used for the construction of a sound decision support system, incorporating elements of a previous one, at a future stage. Full article
(This article belongs to the Special Issue Algorithms in Decision Support Systems)

30 pages, 733 KiB  
Article
An Evaluation Framework and Algorithms for Train Rescheduling
by Sai Prashanth Josyula, Johanna Törnquist Krasemann and Lars Lundberg
Algorithms 2020, 13(12), 332; https://doi.org/10.3390/a13120332 - 11 Dec 2020
Cited by 4 | Viewed by 3315
Abstract
In railway traffic systems, whenever disturbances occur, it is important to effectively reschedule trains while optimizing the goals of various stakeholders. Algorithms can provide significant benefits to support the traffic controllers in train rescheduling, if well integrated into the overall traffic management process. In the railway research literature, many algorithms are proposed to tackle different versions of the train rescheduling problem. However, limited research has been performed to assess the capabilities and performance of alternative approaches, with the purpose of identifying their main strengths and weaknesses. Evaluation of train rescheduling algorithms enables practitioners and decision support systems to select a suitable algorithm based on the properties of the type of disturbance scenario in focus. It also guides researchers and algorithm designers in improving the algorithms. In this paper, we (1) propose an evaluation framework for train rescheduling algorithms, (2) present two train rescheduling algorithms: a heuristic and a MILP-based exact algorithm, and (3) conduct an experiment to compare the two multi-objective algorithms using the proposed framework (a proof-of-concept). It is found that the heuristic algorithm is suitable for solving simpler disturbance scenarios since it is quick in producing decent solutions. For complex disturbances wherein multiple trains experience a primary delay due to an infrastructure failure, the exact algorithm is found to be more appropriate. Full article
(This article belongs to the Special Issue Algorithms in Decision Support Systems)
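
To illustrate the flavor of a greedy rescheduling heuristic for a disturbed timetable (the paper's own heuristic and MILP-based exact algorithm are described in the article; this is only a simplified sketch with made-up trains, times, and priority weights), the snippet below dispatches trains through one shared single-track section in order of stakeholder weight and reports the resulting delays.

```python
# Simplified greedy rescheduling on one shared single-track section.
# Not the authors' algorithms; trains, times, and weights are invented inputs.
from dataclasses import dataclass


@dataclass
class Train:
    name: str
    planned_entry: float   # minutes, planned entry into the section
    runtime: float         # minutes needed to traverse the section
    weight: float          # stakeholder priority (e.g., passenger vs. freight)


def greedy_reschedule(trains, section_free_at=0.0):
    """Dispatch trains one by one, highest weight first, and accumulate delays."""
    schedule = []
    for t in sorted(trains, key=lambda t: -t.weight):
        entry = max(t.planned_entry, section_free_at)
        schedule.append((t.name, entry, entry - t.planned_entry))
        section_free_at = entry + t.runtime
    return schedule  # (train, actual entry, delay) triples


if __name__ == "__main__":
    trains = [
        Train("IC1", planned_entry=0, runtime=6, weight=3.0),   # intercity
        Train("R7", planned_entry=2, runtime=8, weight=1.5),    # regional
        Train("F2", planned_entry=1, runtime=10, weight=1.0),   # freight
    ]
    # The section is blocked for the first 4 minutes (the "disturbance").
    for name, entry, delay in greedy_reschedule(trains, section_free_at=4.0):
        print(f"{name}: enters at {entry:.0f} min, delay {delay:.0f} min")
```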

15 pages, 2314 KiB  
Article
Efficient Rule Generation for Associative Classification
by Chartwut Thanajiranthorn and Panida Songram
Algorithms 2020, 13(11), 299; https://doi.org/10.3390/a13110299 - 17 Nov 2020
Cited by 4 | Viewed by 2791
Abstract
Associative classification (AC) is a mining technique that integrates classification and association rule mining to perform classification on unseen data instances. AC is one of the effective classification techniques that applies the generated rules to perform classification. In particular, the number of frequent ruleitems generated by AC is inherently determined by the chosen minimum support threshold. A low minimum support can potentially generate a large set of ruleitems. This is one of the major drawbacks of AC: some of the generated ruleitems are never used in the classification stage and, to reduce the rule-mapping time, have to be removed from the set. This pruning process can be a computational burden and massively consumes memory resources. In this paper, a new AC algorithm is proposed to directly discover a compact number of efficient rules for classification without the pruning process. A vertical data representation technique is implemented to avoid redundant rule generation and to reduce the time used in the mining process. The experimental results show that the proposed algorithm achieves good results in terms of accuracy, the number of generated ruleitems, classifier building time, and memory consumption, especially when compared to the well-known algorithms Classification-Based Association (CBA), Classification based on Multiple Association Rules (CMAR), and Fast Associative Classification Algorithm (FACA). Full article
(This article belongs to the Special Issue Algorithms in Decision Support Systems)
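
The vertical data representation mentioned in the abstract can be illustrated with a small, self-contained sketch: each item is stored with the set of transaction IDs (a tid-list) in which it occurs, so the support and confidence of a candidate ruleitem are obtained by set intersection rather than by rescanning the data. The dataset and the rule below are invented for illustration and do not reproduce the proposed algorithm.

```python
# Vertical (tid-list) support counting for associative classification ruleitems.
# Toy data; not the algorithm proposed in the paper.
from collections import defaultdict

# Each transaction: (tid, items, class label)
transactions = [
    (1, {"a", "b"}, "yes"),
    (2, {"a", "c"}, "yes"),
    (3, {"b", "c"}, "no"),
    (4, {"a", "b"}, "yes"),
]

# Build tid-lists for items and for class labels.
item_tids = defaultdict(set)
class_tids = defaultdict(set)
for tid, items, label in transactions:
    class_tids[label].add(tid)
    for item in items:
        item_tids[item].add(tid)


def ruleitem_stats(items, label):
    """Support/confidence of rule 'items -> label' via tid-list intersection."""
    covered = set.intersection(*(item_tids[i] for i in items))
    support = len(covered & class_tids[label]) / len(transactions)
    confidence = len(covered & class_tids[label]) / len(covered) if covered else 0.0
    return support, confidence


print(ruleitem_stats({"a", "b"}, "yes"))  # -> (0.5, 1.0)
```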

11 pages, 2891 KiB  
Article
A Comparison of Ensemble and Dimensionality Reduction DEA Models Based on Entropy Criterion
by Parag C. Pendharkar
Algorithms 2020, 13(9), 232; https://doi.org/10.3390/a13090232 - 16 Sep 2020
Viewed by 2003
Abstract
Dimensionality reduction research in data envelopment analysis (DEA) has focused on subjective approaches to reduce dimensionality. Such approaches are less useful or attractive in practice because a subjective selection of variables introduces bias. A competing unbiased approach would be to use ensemble DEA scores. This paper illustrates that in addition to unbiased evaluations, the ensemble DEA scores result in unique rankings that have high entropy. Under restrictive assumptions, it is also shown that the ensemble DEA scores are normally distributed. Ensemble models do not require any new modifications to existing DEA objective functions or constraints, and when ensemble scores are normally distributed, returns-to-scale hypothesis testing can be carried out using traditional parametric statistical techniques. Full article
(This article belongs to the Special Issue Algorithms in Decision Support Systems)
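
To illustrate the idea of ensembling DEA scores (the specific models and the entropy criterion are developed in the paper and are omitted here), the sketch below solves a standard input-oriented CCR multiplier model with scipy.optimize.linprog and averages the efficiency scores obtained under two assumed variable subsets into an ensemble score. The inputs, outputs, and subsets are invented.

```python
# Ensemble of DEA (CCR multiplier form) efficiency scores over variable subsets.
# Toy data and subsets; not the models evaluated in the paper.
import numpy as np
from scipy.optimize import linprog

# Rows = decision-making units (DMUs); columns = inputs X and outputs Y.
X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 2.0], [5.0, 4.0]])
Y = np.array([[5.0, 4.0], [6.0, 2.0], [4.0, 5.0], [8.0, 3.0]])


def ccr_efficiency(X, Y, o):
    """CCR efficiency of DMU o: max u.y_o  s.t.  v.x_o = 1,  u.y_j - v.x_j <= 0."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[o], np.zeros(m)])              # variables: [u, v]
    A_ub = np.hstack([Y, -X])                              # u.y_j - v.x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]])[None, :]    # v.x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m), method="highs")
    return -res.fun


# Ensemble: average the scores obtained from two (assumed) variable subsets.
subsets = [(slice(0, 1), slice(0, 1)), (slice(None), slice(None))]  # (inputs, outputs)
scores = np.array([[ccr_efficiency(X[:, xi], Y[:, yi], o) for o in range(len(X))]
                   for xi, yi in subsets])
ensemble = scores.mean(axis=0)
print(np.round(ensemble, 3))   # one unbiased ensemble score per DMU
```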

15 pages, 299 KiB  
Article
Diagnosis in Tennis Serving Technique
by Eugenio Roanes-Lozano, Eduardo A. Casella, Fernando Sánchez and Antonio Hernando
Algorithms 2020, 13(5), 106; https://doi.org/10.3390/a13050106 - 25 Apr 2020
Cited by 5 | Viewed by 3867
Abstract
Tennis is a sport with a very complex technique. Amateur tennis players have trainers and/or coaches, but are not usually accompanied by them to championships. Curiously, in this sport, the result of many matches can be changed by a small hint like ‘hit the ball a little higher when serving’. However, the biomechanics of a tennis stroke are only clear to an expert. We therefore developed a prototype of a rule-based expert system (RBES) aimed at an amateur competition player who is not accompanied by his/her coach at a championship and is not serving as usual (the RBES is so far restricted to serving). The player has to answer a set of questions about how he/she is serving that day and about his/her usual serving technique, and the RBES obtains a diagnosis of the possible reasons using logic inference (according to the logic rules that have been previously given to the RBES). A certain knowledge of tennis terminology and technique is required from the player, but that is something known at this level. The underlying logic is Boolean and the inference engine is algebraic (it uses Groebner bases). Full article
(This article belongs to the Special Issue Algorithms in Decision Support Systems)
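
The inference engine in the paper is algebraic (Boolean rules translated into polynomials and decided with Groebner bases). As a much simpler stand-in that only conveys the question-and-diagnose flow, the sketch below runs plain forward chaining over a couple of hypothetical serving rules; the rules, facts, and advice strings are invented and are not the authors' rule base.

```python
# Toy forward-chaining diagnosis for a tennis serve, standing in for the paper's
# Groebner-basis inference engine. Rules and facts are hypothetical examples.

# Each rule: (set of required facts, concluded fact).
RULES = [
    ({"toss_too_low", "serves_long"}, "contact_point_too_low"),
    ({"contact_point_too_low"}, "advice: toss the ball a little higher"),
    ({"grip_too_tight", "serves_into_net"}, "advice: relax the grip on the racket"),
]


def diagnose(facts):
    """Apply rules until no new fact can be derived; return the advice found."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for required, conclusion in RULES:
            if required <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return sorted(f for f in facts if f.startswith("advice:"))


# Answers a player might give about how he/she is serving that day:
print(diagnose({"toss_too_low", "serves_long"}))
# -> ['advice: toss the ball a little higher']
```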

20 pages, 8772 KiB  
Article
Decision Support System for Fitting and Mapping Nonlinear Functions with Application to Insect Pest Management in the Biological Control Context
by Ritter A. Guimapi, Samira A. Mohamed, Lisa Biber-Freudenberger, Waweru Mwangi, Sunday Ekesi, Christian Borgemeister and Henri E. Z. Tonnang
Algorithms 2020, 13(4), 104; https://doi.org/10.3390/a13040104 - 24 Apr 2020
Cited by 6 | Viewed by 4217
Abstract
The process of moving from experimental data to modeling and characterizing the dynamics and interactions in natural processes is a challenging task. This paper proposes an interactive platform for fitting data derived from experiments to mathematical expressions and carrying out spatial visualization. The platform is designed using a component-based software architectural approach and implemented in the R and Java programming languages. It uses experimental data as input for model fitting, then applies the obtained model at the landscape level via spatial temperature grid data to yield regional and continental maps. Different modules and functionalities of the tool are presented with a case study, in which the tool is used to establish a temperature-dependent virulence model and map the potential zone of efficacy of a fungal-based biopesticide. The decision support system (DSS) was developed in generic form, and it can be used by anyone interested in fitting mathematical equations to experimental data collected following the described protocol; depending on the type of investigation, it also offers the possibility of projecting the model at the landscape level. Full article
(This article belongs to the Special Issue Algorithms in Decision Support Systems)
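
As a small illustration of the fit-then-map workflow described in the abstract (the platform itself is implemented in R and Java; this Python sketch uses invented measurements and an assumed Gaussian-shaped temperature-virulence curve), the snippet fits a nonlinear model to experimental points with scipy and then evaluates it over a temperature grid, the step that would feed a regional map.

```python
# Fit a temperature-dependent virulence curve, then apply it to a spatial grid.
# Data, model shape, and grid values are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit


def virulence(temp_c, peak, t_opt, width):
    """Assumed Gaussian-shaped response: efficacy peaks at an optimal temperature."""
    return peak * np.exp(-((temp_c - t_opt) ** 2) / (2.0 * width ** 2))


# Hypothetical bioassay measurements: temperature (deg C) vs. observed mortality rate.
temps = np.array([15.0, 20.0, 25.0, 28.0, 30.0, 33.0, 36.0])
mortality = np.array([0.10, 0.35, 0.80, 0.90, 0.85, 0.50, 0.15])

params, _ = curve_fit(virulence, temps, mortality, p0=[1.0, 27.0, 5.0])
print("fitted (peak, t_opt, width):", np.round(params, 2))

# A tiny stand-in for a spatial temperature grid (e.g., mean temperature per cell).
temperature_grid = np.array([[18.0, 24.0, 29.0],
                             [22.0, 27.0, 31.0]])
efficacy_map = virulence(temperature_grid, *params)
print(np.round(efficacy_map, 2))   # per-cell predicted efficacy, ready for mapping
```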

19 pages, 580 KiB  
Article
A Case Study for a Big Data and Machine Learning Platform to Improve Medical Decision Support in Population Health Management
by Fernando López-Martínez, Edward Rolando Núñez-Valdez, Vicente García-Díaz and Zoran Bursac
Algorithms 2020, 13(4), 102; https://doi.org/10.3390/a13040102 - 23 Apr 2020
Cited by 26 | Viewed by 9108
Abstract
Big data and artificial intelligence are currently two of the most important and trending technologies for innovation and predictive analytics in healthcare, leading the digital healthcare transformation. The Keralty organization is already working on developing an intelligent big data analytic platform based on machine learning and data integration principles. We discuss how this platform is the new pillar for the organization to improve population health management and value-based care and to address upcoming challenges in healthcare. The benefits of using this new data platform for community and population health include better healthcare outcomes, improvement of clinical operations, reduction of care costs, and generation of accurate medical information. Several machine learning algorithms implemented by the authors can use the large standardized datasets integrated into the platform to improve the effectiveness of public health interventions, diagnosis, and clinical decision support. The data integrated into the platform come from Electronic Health Records (EHR), Hospital Information Systems (HIS), Radiology Information Systems (RIS), and Laboratory Information Systems (LIS), as well as data generated by public health platforms, mobile data, social media, and clinical web portals. This massive volume of data is integrated using big data techniques for storage, retrieval, processing, and transformation. This paper presents the design of a digital health platform in a healthcare organization in Colombia to integrate operational, clinical, and business data repositories with advanced analytics to improve the decision-making process for population health management. Full article
(This article belongs to the Special Issue Algorithms in Decision Support Systems)
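
The abstract describes the platform's architecture rather than a specific model, but to give a flavor of how integrated records could feed a clinical decision-support model, the sketch below trains a scikit-learn pipeline on a synthetic table of EHR-like features. Every column, value, and label here is an assumption generated for illustration, not data from the platform described in the paper.

```python
# Illustrative decision-support model on synthetic EHR-like features.
# Not the Keralty platform; features and labels are randomly generated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
# Columns: age (years), systolic blood pressure (mmHg), glucose (mg/dL).
X = np.column_stack([rng.normal(55, 12, n), rng.normal(130, 18, n), rng.normal(105, 25, n)])
# Synthetic "high-risk" label loosely driven by the three features plus noise.
risk = 0.03 * (X[:, 0] - 55) + 0.02 * (X[:, 1] - 130) + 0.02 * (X[:, 2] - 105)
y = (risk + rng.normal(0, 0.5, n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = Pipeline([("scale", StandardScaler()),
                  ("clf", LogisticRegression(max_iter=1000))])
model.fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 3))
```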

11 pages, 1167 KiB  
Article
An Unknown Radar Emitter Identification Method Based on Semi-Supervised and Transfer Learning
by Yuntian Feng, Guoliang Wang, Zhipeng Liu, Runming Feng, Xiang Chen and Ning Tai
Algorithms 2019, 12(12), 271; https://doi.org/10.3390/a12120271 - 16 Dec 2019
Cited by 15 | Viewed by 3669
Abstract
To address the difficulty of dealing with unknown radar emitters in the radar emitter identification process, we propose an unknown radar emitter identification method based on semi-supervised and transfer learning. Firstly, we construct a support vector machine (SVM) model based on transfer learning, using the information of labeled samples in the source domain to train in the target domain, which addresses the problem that the training data and the testing data do not satisfy the same-distribution hypothesis. Then, we design a semi-supervised co-training algorithm that uses the information of unlabeled samples to enhance the training effect, which addresses the problem that insufficient labeled data result in inadequate training of the classifier. Finally, we combine the transfer learning method with the semi-supervised learning method for the unknown radar emitter identification task. Simulation experiments show that the proposed method can effectively identify an unknown radar emitter and still maintain high identification accuracy within a certain measurement error range. Full article
(This article belongs to the Special Issue Algorithms in Decision Support Systems)
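
The paper combines transfer learning with a co-training scheme; as a much-reduced illustration of the semi-supervised side only, the sketch below runs a self-training loop in which an SVM repeatedly pseudo-labels the unlabeled samples it is most confident about. All data are synthetic, and the co-training and transfer-learning components of the paper are omitted.

```python
# Self-training SVM on synthetic "emitter feature" data: a simplified stand-in
# for the paper's co-training + transfer-learning scheme.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Two synthetic emitter classes in a 2-D feature space (e.g., RF and PRI features).
X = np.vstack([rng.normal([0, 0], 0.8, (100, 2)), rng.normal([3, 3], 0.8, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Only a handful of labeled samples; the rest are treated as unlabeled.
labeled = np.concatenate([np.arange(5), np.arange(100, 105)])  # 5 per class
unlabeled = np.setdiff1d(np.arange(len(X)), labeled)
X_lab, y_lab = X[labeled], y[labeled]
X_unl = X[unlabeled]

for _ in range(5):  # a few self-training rounds
    clf = SVC(probability=True).fit(X_lab, y_lab)
    if len(X_unl) == 0:
        break
    proba = clf.predict_proba(X_unl)
    confident = np.max(proba, axis=1) > 0.95          # high-confidence pseudo-labels
    X_lab = np.vstack([X_lab, X_unl[confident]])
    y_lab = np.concatenate([y_lab, clf.classes_[np.argmax(proba[confident], axis=1)]])
    X_unl = X_unl[~confident]

print("labeled set grew to", len(X_lab), "samples")
```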
