Article

IndShaker: A Knowledge-Based Approach to Enhance Multi-Perspective System Dynamics Analysis

School of Computer Science, Faculty of Engineering and IT, University of Technology Sydney, Ultimo, NSW 2007, Australia
Modelling 2023, 4(1), 19-34; https://doi.org/10.3390/modelling4010002
Submission received: 27 September 2022 / Revised: 24 November 2022 / Accepted: 20 December 2022 / Published: 23 December 2022

Abstract

Decision making as a result of system dynamics analysis requires, in practice, a straightforward and systematic modeling capability as well as a high level of customization and flexibility to adapt to situations and environments that may differ greatly from each other. While, in general terms, a completely generic approach may not be as effective as ad hoc solutions, the proper application of modern technology can facilitate agile strategies resulting from a smart combination of qualitative and quantitative aspects. In order to address such complexity, we propose a knowledge-based approach that integrates the systematic computation of heterogeneous criteria with open semantics. The holistic understanding of the framework is described by a reference architecture, and the proof-of-concept prototype developed can support high-level system analysis, as well as being suitable within a number of application contexts—i.e., as a research/educational tool, communication framework, gamification and participatory modeling. Additionally, the knowledge-based philosophy, developed upon Semantic Web technology, increases the capability in terms of holistic knowledge building and re-use via interoperability. Last but not least, the framework is designed to evolve continuously in the near future, for instance by incorporating more advanced AI-powered features.

1. Introduction

Decision making as a result of system dynamics analysis requires, in practice, a straightforward and systematic modeling capability as well as a high level of customization and flexibility to adapt to situations and environments that may differ greatly from each other. The analysis of system dynamics aims to frame, understand, and discuss complex issues and problems; however, the requirements from different environments and the objective relevance of situation-specific aspects intrinsically suggest an ad hoc approach, possibly supported by some method, such as the very popular multi-criteria decision analysis (MCDA).
While, in general terms, a completely generic approach may not be as effective as ad hoc solutions, a proper adoption of modern technology can facilitate agile strategies as the result of a smart combination of qualitative and quantitative aspects. In order to address such complexity, we propose a knowledge-based approach [1] that integrates the systematic computation of heterogeneous criteria with open semantics [2]. The underlying idea is to adopt rich semantics to provide the highest level of flexibility and adaptability to practical cases. In order to achieve such a goal, we distinguish between functional and informative (user-level) semantics. While the former class aims to align computations and system modeling, the latter aims to properly structure, integrate and present the semantics that are relevant in order to correctly understand and interpret data frameworks, in an attempt to minimize bias and uncertainty.
The holistic understanding of the framework is described by a reference architecture, and the proof-of-concept prototype developed—i.e., IndShaker—can support high-level system analysis, as well as being suitable within a number of application contexts, e.g., as a research/educational tool, communication framework, gamification [3,4] and participatory modeling [5].
Additionally, the knowledge-based philosophy, developed upon Semantic Web technology, increases the holistic capability in terms of knowledge building [6] and re-use [7] via interoperability [8]. Last but not least, the framework is designed to evolve continuously in the near future, for instance by incorporating more advanced AI-powered features.
The computational method has been previously proposed [9]. This paper focuses on the evolution and progressive consolidation of the framework after experimentation according to an agile philosophy. Indeed, the studies conducted so far have pointed out the need for interoperability [10], re-use and enriched semantics to support enhanced analysis capabilities. Such a new set of requirements was addressed by introducing a knowledge-based approach relying on Semantic Web technology.
The paper continues with a classic related work section (Section 2) and a brief discussion of background concepts (Section 3), while its core part is composed of three main sections that provide, respectively, an overview of the system (Section 4), a presentation of the knowledge-based approach (Section 5) and a discussion of potential applications (Section 6). Finally, two case studies are concisely proposed in Section 7.

2. Related Work

IndShaker implements a computational method previously proposed [9]. Such a method has been applied, among others, to model and analyze a case study on sustainable global development [11], as well as being further discussed in terms of bias and uncertainty management [12].
The outcome as presented in this paper results from the nexus of different bodies of knowledge, which have converged on a concrete model of analysis. Despite the many possible points of view, this work should be considered within the broad field of decision making. The emphasis is on techniques that assume multiple criteria (MCDA [13]), which are preferred over methods designed for big data [14] and over more specific approaches (e.g., naturalistic decision making [15], semantic decision making [16], and fuzzy decision making [17]).
At both a conceptual and a practical level, the key advance from the previous work by the authors is the integration of a knowledge-based approach underpinned by formal ontologies [18]. It enables an active knowledge space in which re-usable data, information and knowledge is dynamically composed via interoperability and may also result from automatic reasoning according to the most advanced models, such as the Semantic Web [19] and Linked Data [20]. Additionally, the knowledge base underpinning the tool proposed in this paper is linked to existing vocabularies (see Section 5).
More holistically, as extensively addressed throughout the paper, this contribution deals with a comprehensive analysis framework based on the enrichment of original models and underpinned by a formally specified knowledge-based approach.

3. Background Concepts

Because of its intrinsic multi-disciplinarity, the potential value provided by IndShaker should be understood in context. The proposed system assumes the following underlying concepts:
  • Multi-criteria Analysis. The target system intrinsically addresses scenarios that require more than one criterion to perform a reasonable analysis. Typical examples are, among others, situations characterized by complexity [21], wickedness [22], as well as soft systems [23]. Complexity may refer to many different contexts but it is, in general, associated with unpredictable behaviors—e.g., human behavior. Wicked problems present a significant resistance to solution and are, indeed, often considered impossible to solve because of their requirements (normally incomplete, contradictory and constantly changing) and of complex dependencies, which may generate trade-offs and other issues. Soft problems are usually real-world problems whose formulation is problematic, normally because they can be perceived in different ways depending on the point of view. MCDA is a classic and consolidated approach [24] that has evolved in the context of different application domains [25].
  • Evidence-based approach. The analysis strategy assumes measurable input (indicators) to establish an evidence-based approach to decision making [26].
  • Multi-perspective interpretation. Interpretation is another key factor for the target analysis, as any complex scenario is likely to be understood and perceived differently by different individuals or stakeholders. It affects, above all, the decision-making process (e.g., ref. [27]).
  • Heterogeneity. The information adopted to model a system of a certain complexity is very likely to be heterogeneous, which is normally required whenever the target analysis aims to reflect or consider multiple aspects. Properly dealing with heterogeneity (e.g., ref. [28]) becomes a critical factor in creating a focused analysis framework and avoiding entropic or excessively biased environments.
  • Quantitative/qualitative metrics. Qualitative (e.g., ref. [29]) and quantitative (e.g., ref. [30]) methods are available for decision making. The analysis framework is based on the concept of quantitative measures. However, such a quantitative approach is integrated with qualitative aspects to enable a more contextual analysis.
  • Adaptive mechanisms. Adaptive decision making [31] is a well-known need for a generic approach, as frameworks need to adapt somehow to specific situations and contexts. The proposed solution adopts an adaptive algorithm that systematically tunes computational parameters to limit bias that may come from strong numerical differences in heterogeneous environments. A transparent view of the tuning parameters contributes to avoiding a “black-box” approach.
  • Dynamic analysis model. In order to assure a model of analysis that takes into account the evolution of a given system, the framework assumes an observation interval [t₀, tₙ] and looks at the evolution of the system from t₀.
  • Semantics associated with data. The analysis is performed by combining numerical indicators that are semantically enriched (e.g., ref. [32]) to describe contextual and situation-specific interpretations. In the approach proposed, semantics are understood at different levels and, in general terms, may be dynamically specified or extended to reflect the analysis context.
  • Uncertainty management via transparency. Uncertainty is somehow an intrinsic factor in system analysis and decision making. It evidently applies also to MCDA [33,34]. In the context of the proposed framework, uncertainty is mostly related to the relevance associated with the different criteria and to the adaptive mechanisms, as well as to missing data. The metrics provided to estimate uncertainty contribute to a more transparent analysis environment on one side and, on the other side, may be used as a driving factor to select input data in case of multiple available choices.
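Several of the concepts above (multi-criteria aggregation, normalization as an adaptive mechanism, wished trends, the observation interval) can be illustrated with a minimal sketch. This is not the tool's actual algorithm: the functions and data below are invented for illustration.

```python
# Minimal sketch of combining heterogeneous indicators (not the
# authors' actual algorithm; all names and data are illustrative).

def normalize(series):
    """Scale a series to [0, 1] to limit bias from heterogeneous ranges."""
    lo, hi = min(series), max(series)
    if hi == lo:
        return [0.5] * len(series)
    return [(v - lo) / (hi - lo) for v in series]

def trend_score(series, wished):
    """Signed change over the observation interval [t0, tn];
    positive means the series moved toward its wished trend."""
    norm = normalize(series)
    delta = norm[-1] - norm[0]
    return delta if wished == "increasing" else -delta

def combine(indicators, weights):
    """Weighted combination of per-indicator trend scores."""
    total = sum(weights.values())
    return sum(trend_score(series, wished) * weights[name] / total
               for name, (series, wished) in indicators.items())

# Hypothetical data: GDP should increase, unemployment should decrease.
indicators = {
    "gdp":          ([100.0, 110.0, 105.0], "increasing"),
    "unemployment": ([8.0, 7.5, 7.7],       "decreasing"),
}
weights = {"gdp": 0.3, "unemployment": 0.7}
neutral = {"gdp": 0.5, "unemployment": 0.5}
print(combine(indicators, weights))  # weighted performance
print(combine(indicators, neutral))  # "neutral computation" baseline
```

A transparent report of the normalization and the weights, as advocated above, is what keeps such a computation from becoming a black box.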

4. Framework Overview

As previously introduced, IndShaker aims to support the analysis of systems characterized by a certain complexity, which are modeled and analyzed by combining different indicators and criteria associated with multiple potential interpretations. This section provides an overview by describing a reference architecture against the current implementation.

4.1. Reference Architecture

The Reference Architecture is depicted in Figure 1. Intuitively, it reflects the key underlying concept which assumes a systematic, yet customizable, computational framework integrated with high-level semantics to support domain-specific analysis. More concretely, in terms of software architecture, the user application distinguishes between the computational tool itself (IndShaker) and the Semantic Engine which provides an abstracted functional layer for the interpretation and management of semantics.
Such a philosophy intrinsically relies on a knowledge base and, therefore, on the capability to establish formal semantics by adopting rich data models. Semantic Web technology [19] provides a consolidated data infrastructure upon standard Web technology to enable Linked Data [20] via interoperability.
As discussed in detail later on in the paper, the proposed architectural model is composed of three different layers in terms of semantics: (i) an internal ontology supports the core computational functionalities and related semantics, (ii) a number of linked external vocabularies allow further capabilities, while, in general terms, (iii) additional customized semantics may be linked as per the common ontological approach [35]. Establishing and maintaining such a kind of knowledge environment on a large scale is definitely a challenge [36], while an application-specific, focused approach such as that proposed in this paper can be considered effective and relatively easy to adopt in practice.
At a functional level, the key assumption is, on one hand, the capability of a knowledge-based tool to interact with complex semantics in a way that is completely transparent to final users and, on the other hand, the existence of agile features to integrate external data into the knowledge base. According to this philosophy, the computational tool works on semantic data at both the input and output levels, meaning input datasets are provided in an ontological format and the output is provided according to a formal ontological model so that it automatically becomes part of the knowledge base. That is a key aspect in terms of knowledge building and re-usability, as existing case studies can be analyzed, compared and eventually modified to define new ones.

4.2. IndShaker V1.0

This section addresses the current development of the tool against the reference architecture. IndShaker is an integrated component which implements the computational tool and the semantic engine as previously defined. The emphasis is on the description of the open-source package and of the key features looking at user interfaces. Additionally, the limitations of the version 1.0 are briefly discussed.

4.2.1. Open-Source Software Tool

A simplified view of the open-source software package is represented in Figure 2. The core software module is composed of five different packages. The package app implements the underlying algorithms and, together with infoPanel, the graphical user interface (GUI). The I/O is managed by the functionalities provided by IOcontrollers, while the package model provides data structures. Last but not least, ontology includes functionalities related to semantics and the management of the different ontological frameworks.
The core module relies on low-level functionalities implemented by the module listeners to support user interactions. Additionally, a number of supporting tools (e.g., to generate semantically enriched datasets from raw data) are provided by the package tools.
The current implementation assumes the Web Ontology Language (OWL) and adopts two external software libraries: Pellet [37] as an OWL reasoner and JFreeChart (JFreeChart—https://www.jfree.org/jfreechart/index.html accessed on 2 July 2021) for the visual representation of computations.

4.2.2. Graphic User Interface (GUI)

In order to provide an overview of the current implementation from a user perspective, we glance at the user interface, whose main panel is proposed in Figure 3.
As shown, it includes three different sets of components for (i) the management of the input knowledge and of the knowledge base, (ii) advanced settings and (iii) input overview. The first set of functionalities aims to import the input datasets and to check/manage the associated semantics. Advanced settings are related to weights, calibration and the management of weights in terms of resources for decision making (e.g., establishing constraints). Finally, the last component provides a concise overview of the current input.
Figure 4 shows the output panel, which intuitively allows users to visualize the output of a computation and eventually to export such results into an ontological format, i.e., described according to the provided ontology.
Additionally, the platform includes a number of internal tools to support the most common user operations. Currently, the DataSet Generator (Figure 5) supports an easy conversion of raw data into an ontological format recognized by the computational component, while the Calibrator enables expert users to manually calibrate the computational tool and, eventually, to set up more complex analyses (e.g., multi-system). A third tool, the KG-Visualizer, is under development and aims to visualize the computational process, including both input and output, as a knowledge graph [38] underpinned by formal ontologies.
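As a rough sketch of what a DataSet-Generator-style conversion involves, the following code turns raw (year, value) rows into a Turtle fragment. The ind: prefix and all class and property names are hypothetical, invented for illustration; they are not the tool's actual ontology.

```python
# Hypothetical sketch of a raw-data-to-ontology conversion; the
# ind: vocabulary below is invented, not the tool's actual ontology.

def rows_to_turtle(name, rows, wished_trend,
                   base="http://example.org/indshaker#"):
    """Serialize raw (year, value) rows as a Turtle fragment."""
    lines = [
        f"@prefix ind: <{base}> .",
        "",
        f"ind:{name} a ind:InputIndicator ;",
        f'    ind:wishedTrend "{wished_trend}" ;',
    ]
    for year, value in rows:
        # One blank node per observation, closed with ';' for now.
        lines.append(f"    ind:hasValue [ ind:year {year} ; "
                     f"ind:value {value} ] ;")
    # Replace the final ';' with the statement-terminating '.'.
    lines[-1] = lines[-1].rstrip(";").rstrip() + " ."
    return "\n".join(lines)

ttl = rows_to_turtle("gdp_per_capita",
                     [(2005, 100.0), (2017, 125.0)], "increasing")
print(ttl)
```

The real tool additionally validates the result against the internal ontology, which a sketch like this obviously does not attempt.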

4.2.3. Current Limitations

The implementation previously discussed is understood as a relatively mature research prototype. On one side, it supports an evolving proof of concept that allows the refining of existing functionalities and the design of further features as a response to different applications (see Section 6). On the other side, such an implementation may be used in practice as a working framework whenever a “standard” level of customization is required.
Current limitations may be understood at different levels. More concretely, most limitations concern the user interfaces. This reflects an intrinsic difficulty in generalizing needs and requirements from different kinds of users across the various application domains. Such limitations also affect the semantic engine, as the potential of the provided ontological framework is only partially exploited.
At a functional level, the software currently lacks the capability to automatically adapt to imperfect data—i.e., missing data or wrong data alignment—as well as typical functionalities, such as the capability to provide projections on hypothetical future values based on previously computed trends.
Last but not least, the current version does not distinguish between expert users, who are expected to have a technical background, and non-expert users, who need to use the tool at a very abstract level. The former class of user may find the customization level allowed by the GUI very limited, while the latter may find some settings too complicated. We expect researchers to approach the framework in a completely different way, as they presumably need the maximum possible level of customization, which requires the capability to extend or modify both the semantics and the computational engine.

5. A Knowledge-Based Approach

As briefly explained in the introductory part of the paper, the added value characterizing the current version of the framework, at both a conceptual and a practical level, is provided by the knowledge-based approach. In this section, the ontological structure that underpins computations and the user-level application is analyzed in detail. First, the ontology itself is described by providing, as usual, a concise overview of the main concepts and the relationships existing among them. Then, an example of the formal specification focusing on input and output knowledge is proposed.

5.1. Ontological Support: An Overview

The OWL2 implementation of the ontological support currently provided is presented in Figure 6, Figure 7 and Figure 8, while linked external vocabularies are reported in Table 1. The main classes are proposed in Figure 6, object properties in Figure 7 and attributes/annotations in Figure 8.
Table 1. Linked external ontologies.
Ontology | Prefix | Scope | Reference
VirtualTable | VT | Data integration purpose | [39]
FN-Indicator | IND | Specification of composed indicators | [40]
PERSWADE-CORE | PERSWADE | Project/Case study description | [41]
EM-Ontology | EM | Stakeholder specification | [42]
The current approach assumes that each building block—i.e., input indicators and computation outputs—is described as a stand-alone dataset. Those building blocks are semantically linked. For instance, a computation result is associated with the input datasets considered, together with the weight-set and the configuration parameters adopted in the computation. Additionally, the building blocks may be semantically enriched to also incorporate user-level semantics.
Figure 6. Main classes in Protege [43].
Figure 7. Main object properties in Protege [43].
Figure 8. Attributes and Annotations in Protege [43].
A comprehensive, fine-grained description of the ontology is beyond the scope of this paper. However, in order to provide an overview of the formal specifications underpinning the main building blocks, the next sub-sections address the ontological specification of the input and the output knowledge, respectively.

5.2. From Indicators to Input Knowledge

One of the primary goals of the ontology is to describe the input knowledge resulting from the integration of raw data with semantics, including also customized enrichments. This applies one of the key principles underlying the framework, which assumes that the whole analysis process is performed on customized knowledge, dynamically defined to understand raw data in context.
By adopting such a rich vocabulary, an input indicator can be specified according to the data structure proposed in Figure 9.
The core specification assumes each indicator defined as the integrated description of numerical values with related semantics. The latter includes functional semantics (e.g., the wished trend), typical metadata (e.g., description and source) and other characterizations, such as associated categories and stakeholders.
A simplified example of a specification in OWL adopting the provided ontology is shown in Listing 1.
Listing 1. Simplified example of an input structure in OWL.
In the example, an input indicator is mapped from a relational table, and the corresponding data structure is generated accordingly, together with the semantic characterizations.
Additional ad hoc semantics may be specified and integrated with the main schema through the typical mechanisms provided by the current Semantic Web technology.
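The input structure described above can be sketched as a plain data container. The field names here are illustrative assumptions based on the description, not the actual terms of the provided ontology.

```python
from dataclasses import dataclass, field

@dataclass
class InputIndicator:
    """Sketch of an input indicator: numerical values integrated with
    functional semantics and metadata. Field names are illustrative,
    not the actual vocabulary of the provided ontology."""
    name: str
    values: dict                  # year -> numerical value
    wished_trend: str             # functional semantics
    description: str = ""         # typical metadata
    source: str = ""
    categories: list = field(default_factory=list)
    stakeholders: list = field(default_factory=list)

# Hypothetical instance, analogous to the mapped relational table.
gdp = InputIndicator(
    name="gdp_per_capita",
    values={2005: 100.0, 2017: 125.0},
    wished_trend="increasing",
    source="hypothetical source",
    categories=["economy"],
)
```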

5.3. Describing Target Knowledge

A simplified view of the ontological structure adopted to describe an output is proposed in Figure 10. Such a representation is conceptually more complex and articulated than the semantics associated with inputs.
An output is still considered an indicator, but it is associated with the more fine-grained concept of OutputIndicator as per the proposed ontology. In the current version, the semantic structure includes a link to the input, i.e., the indicators together with the weight-set and the calibration details used for the computations. A qualitative description of the output is reported through the properties performance and interpretation, while the numerical result of the main computation is integrated with a number of supporting indicators—i.e., the neutral computation and the extreme computations.
The data structure also includes a number of annotations, typically generic descriptive metadata associated with the output and more specific information, such as the method adopted to define weights. Annotations are provided in natural language but may be automatically processed and eventually validated by users to provide further formally specified semantics according to PERSWADE-CORE [41].
A simplified example of output structure in OWL is proposed in Listing 2.
Listing 2. Simplified example of an output structure in OWL.

6. Applications

As previously discussed, the framework has been designed according to a completely generic philosophy, which can be particularized and customized to meet requirements and needs within specific environments through the specification of semantics. Moreover, the open-source approach may provide a further level of customization, assuming very specific requirements that suggest some ad hoc development or, more likely, an extension or a variant of the functionalities currently offered.
In general terms, a number of potential applications were identified as follows:
  • Decision Making/System Analysis. It is the most generic possible understanding of the framework. Decision making is performed as a systematic analysis of system dynamics, which results from the combination of independent indicators. Such an approach becomes valuable and practical in the presence of significant heterogeneity, while also allowing the specification of ad hoc semantics to enforce transparency and, as far as possible, to minimize bias.
  • Communication Framework. The current focus, which includes both quantitative and qualitative aspects, can potentially contribute to enhancing the proper communication of a given result, assessment or analysis. For instance, storytelling [44] may be empowered by adopting an effective visualization based on numerical indicators and trends integrated with user-level semantics.
  • Gamification. Similarly, the framework can underpin gamification strategies [4] at multiple levels in different contexts to achieve different goals. Some of the features already available, such as the possibility to define constraints for weights, are intrinsically suitable for gamification.
  • Research Tool. The current application in the field of Sustainable Global Development previously mentioned is a clear example of the use of the framework as a research tool. Indeed, the framework is expected to facilitate system modeling through indicators and semantics and to support the formulation of research questions related to the target system assessment.
  • Educational purpose. Intuitively, applications within the education domain follow the same mainstream and underlying principles of research, as case studies can be modeled from available data and analysis/assessment can be performed accordingly. A gamified approach to learning [45] could be a further added value.
  • Stakeholder Analysis in Complex Environments. Stakeholder analysis [46] may become challenging in complex environments where unpredictable behaviors can potentially meet contrasting interests and resulting trade-offs. Upon data availability, IndShaker may integrate a quantitative dimension of analysis with qualitative ones (e.g., ref. [42]).
  • Participatory Modeling. Decision-making and knowledge-building processes that require or involve multiple stakeholders [5] can be supported by providing a knowledge-based resource to process heterogeneous data in context.

7. Evaluation

Because of its generic focus, the analysis framework is potentially suitable within a significant range of applications in different domains. Additionally, final users could have very different backgrounds, technical skills and concrete application-specific needs. Therefore, in such a context, a formal validation process is hard to implement and, even assuming it is possible, would address only a very specific scope, with an objective difficulty in generalizing conclusions.
In order to informally evaluate the analysis framework in a realistic and focused way, we propose two different case studies based on global indicators in the field of sustainable development. As briefly explained in the next two subsections, the tool makes it possible, in practice, to easily model a complex system from raw data by setting just a few key parameters.
As a remark, the kind of study proposed is very sensitive to the selection of indicators. In the proposed examples, we considered a minimum number of indicators to facilitate the understanding of the value provided by the analysis framework. Because of the systematic approach, such a value is expected to increase with the scale and complexity of the system to model.

7.1. Case Study 1: Global Socio-Economic Growth

In this first case study, we aim to assess economic growth from a social perspective. In order to achieve such a goal, we selected four global indicators (Table 2): one of them (the classic GDP per capita) represents purely economic trends, while the others reflect different social dynamics (e.g., employment and health). The trends for the individual indicators are shown in Figure 11.
Table 2 reports the key parameters. The wished trend is “increasing” for GDP and life expectancy, which ideally we would like to increase as much as possible in a positive scenario, while the unemployment rate and child mortality should ideally decrease. Weights are set according to a logic that does not give value to the economic growth itself, but rather to its social impact. Among the social indicators, child mortality is considered the most critical one.
Analysis results are reported in Figure 12 and clearly point out a positive performance for the resulting system despite a pessimistic weighting. Indeed, the system performance is lower than the performance assuming the “neutral computation”—i.e., the same average weight for all indicators. However, it is well over the lower extreme computation, which assumes the worst possible weighting.
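The relationship between the weighted result, the neutral computation and the extreme computations can be sketched as follows. The trend scores below are hypothetical placeholders, not the case-study data or the tool's actual algorithm.

```python
# Sketch of neutral and extreme computations over per-indicator trend
# scores (hypothetical values, not the actual case-study results).

def weighted(scores, weights):
    """Weighted average of per-indicator trend scores."""
    total = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total

def neutral(scores):
    """Neutral computation: the same average weight for all indicators."""
    return sum(scores.values()) / len(scores)

def extremes(scores):
    """Lower/upper extremes: all weight on the worst/best indicator."""
    return min(scores.values()), max(scores.values())

scores = {"gdp": 0.5, "life_exp": 0.7, "unemployment": 0.4,
          "child_mortality": 0.8}
weights = {"gdp": 0.1, "life_exp": 0.3, "unemployment": 0.2,
           "child_mortality": 0.4}

lo, hi = extremes(scores)
# Any admissible weighting stays within the extreme bounds.
assert lo <= weighted(scores, weights) <= hi
```

Reporting all three values together, as in Figure 12, is what makes the sensitivity of the result to the chosen weighting transparent.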

7.2. Case Study 2: Assessing Economic Growth

In this second case study, we perform a contextual analysis of economic growth by analyzing purely economic trends (GDP) in light of expenditure in the different sectors. Ideally, we would like to see increasing investment in education and healthcare, as well as decreasing military expenditure. A summary of the key parameters is proposed in Table 3. As in the previous case, the economic growth itself plays a limited role, while all expenditure indicators are associated with the same (high) relevance.
The input data trends in the considered time frame (2005–2017) are reported in Figure 13, while the results are shown in Figure 14. The computations are qualitatively consistent with the results of the previous scenario, even though they present different quantitative values.

8. Conclusions and Future Work

IndShaker models a generic system as a combination of heterogeneous indicators. By analyzing the resulting dynamics in context through the specification of ad hoc semantics, the framework provides an extensive support to complex system analysis.
The knowledge-based approach enables a self-contained, yet open, environment that aims at knowledge building, analysis and re-use via interoperability. While the underlying ontological framework, developed upon Semantic Web technology, establishes an extensible semantic environment, as well as a high level of abstraction to address users without a technological background, the software tool provides a more substantial level of customization for expert users according to an open-source philosophy.
Within the broad area of system analysis and decision making, a number of potential applications were identified, including, among others, communication and storytelling, academic purposes (research and teaching), complex system analysis and participatory modeling.
The current implementation focuses on core functionalities and presents the limitations typical of research prototypes. However, as the discussion of the presented case studies demonstrates, it is mature enough to support agile modeling in the domain. Future work aims at further development, including AI-powered features that are currently the object of research.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

The author would like to thank the anonymous reviewers for the constructive feedback that has allowed a natural improvement of the paper.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Yim, N.H.; Kim, S.H.; Kim, H.W.; Kwahk, K.Y. Knowledge based decision making on higher level strategic concerns: System dynamics approach. Expert Syst. Appl. 2004, 27, 143–158. [Google Scholar] [CrossRef]
  2. Blomqvist, E. The use of Semantic Web technologies for decision support—A survey. Semant. Web 2014, 5, 177–201. [Google Scholar] [CrossRef]
  3. Hamari, J.; Koivisto, J.; Sarsa, H. Does gamification work?—A literature review of empirical studies on gamification. In Proceedings of the 2014 47th Hawaii International Conference on System Sciences, Washington, DC, USA, 6–9 January 2014; pp. 3025–3034. [Google Scholar]
  4. Seaborn, K.; Fels, D.I. Gamification in theory and action: A survey. Int. J. Hum.-Comput. Stud. 2015, 74, 14–31. [Google Scholar] [CrossRef]
  5. Basco-Carrera, L.; Warren, A.; van Beek, E.; Jonoski, A.; Giardino, A. Collaborative modelling or participatory modelling? A framework for water resources management. Environ. Model. Softw. 2017, 91, 95–110. [Google Scholar] [CrossRef]
  6. Cardoso, J. The semantic web vision: Where are we? IEEE Intell. Syst. 2007, 22, 84–88. [Google Scholar] [CrossRef]
  7. Fernández, M.; Overbeeke, C.; Sabou, M.; Motta, E. What makes a good ontology? A case-study in fine-grained knowledge reuse. In Proceedings of the Asian Semantic Web Conference, Shanghai, China, 7–9 December 2009; Springer: Berlin/Heidelberg, Germany, 2009; pp. 61–75. [Google Scholar]
  8. García-Castro, R.; Gómez-Pérez, A. Interoperability results for Semantic Web technologies using OWL as the interchange language. J. Web Semant. 2010, 8, 278–291. [Google Scholar] [CrossRef]
  9. Pileggi, S.F. Is the World Becoming a Better or a Worse Place? A Data-Driven Analysis. Sustainability 2020, 12, 88. [Google Scholar] [CrossRef] [Green Version]
  10. Zacharewicz, G.; Daclin, N.; Doumeingts, G.; Haidar, H. Model driven interoperability for system engineering. Modelling 2020, 1, 94–121. [Google Scholar] [CrossRef]
  11. Pileggi, S.F. Life before COVID-19: How was the World actually performing? Qual. Quant. 2021, 55, 1871–1888. [Google Scholar] [CrossRef]
  12. Pileggi, S.F. Combining Heterogeneous Indicators by Adopting Adaptive MCDA: Dealing with Uncertainty. In Proceedings of the International Conference on Computational Science, Krakow, Poland, 16–18 June 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 514–525. [Google Scholar]
  13. Ishizaka, A.; Nemery, P. Multi-Criteria Decision Analysis: Methods and Software; John Wiley & Sons: Hoboken, NJ, USA, 2013. [Google Scholar]
  14. Gandomi, A.; Haider, M. Beyond the hype: Big data concepts, methods, and analytics. Int. J. Inf. Manag. 2015, 35, 137–144. [Google Scholar] [CrossRef]
  15. Klein, G. Naturalistic decision making. Hum. Factors 2008, 50, 456–460. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Jiang, Y.; Liu, H.; Tang, Y.; Chen, Q. Semantic decision making using ontology-based soft sets. Math. Comput. Model. 2011, 53, 1140–1149. [Google Scholar] [CrossRef]
  17. Blanco-Mesa, F.; Merigó, J.M.; Gil-Lafuente, A.M. Fuzzy decision making: A bibliometric-based review. J. Intell. Fuzzy Syst. 2017, 32, 2033–2050. [Google Scholar] [CrossRef] [Green Version]
  18. Guarino, N.; Oberle, D.; Staab, S. What is an ontology? In Handbook on Ontologies; Springer: Berlin/Heidelberg, Germany, 2009; pp. 1–17. [Google Scholar]
  19. Berners-Lee, T.; Hendler, J.; Lassila, O. The semantic web. Sci. Am. 2001, 284, 34–43. [Google Scholar] [CrossRef]
  20. Bizer, C.; Heath, T.; Berners-Lee, T. Linked data: The story so far. In Semantic Services, Interoperability and Web Applications: Emerging Concepts; IGI Global: Hershey, PA, USA, 2011; pp. 205–227. [Google Scholar]
  21. Ottino, J.M. Engineering complex systems. Nature 2004, 427, 399. [Google Scholar] [CrossRef] [PubMed]
  22. Elia, G.; Margherita, A. Can we solve wicked problems? A conceptual framework and a collective intelligence system to support problem analysis and solution design for complex social issues. Technol. Forecast. Soc. Change 2018, 133, 279–286. [Google Scholar] [CrossRef]
  23. Checkland, P.; Poulter, J. Soft systems methodology. In Systems Approaches to Making Change: A Practical Guide; Springer: Berlin/Heidelberg, Germany, 2020; pp. 201–253. [Google Scholar]
  24. Roy, B.; Vincke, P. Multicriteria analysis: Survey and new directions. Eur. J. Oper. Res. 1981, 8, 207–218. [Google Scholar] [CrossRef]
  25. Velasquez, M.; Hester, P.T. An analysis of multi-criteria decision making methods. Int. J. Oper. Res. 2013, 10, 56–66. [Google Scholar]
  26. Baba, V.V.; HakemZadeh, F. Toward a theory of evidence based decision making. Manag. Decis. 2012, 50, 832–867. [Google Scholar] [CrossRef]
  27. Cheung, C.F.; Lee, W.B.; Wang, W.M.; Chu, K.; To, S. A multi-perspective knowledge-based system for customer service management. Expert Syst. Appl. 2003, 24, 457–470. [Google Scholar] [CrossRef]
  28. Li, G.; Kou, G.; Peng, Y. A group decision making model for integrating heterogeneous information. IEEE Trans. Syst. Man Cybern. Syst. 2016, 48, 982–992. [Google Scholar] [CrossRef]
  29. Brugha, C.M. The structure of qualitative decision-making. Eur. J. Oper. Res. 1998, 104, 46–62. [Google Scholar] [CrossRef]
  30. Chen, W.H. Quantitative decision-making model for distribution system restoration. IEEE Trans. Power Syst. 2009, 25, 313–321. [Google Scholar] [CrossRef]
  31. Glöckner, A.; Hilbig, B.E.; Jekel, M. What is adaptive about adaptive decision making? A parallel constraint satisfaction account. Cognition 2014, 133, 641–666. [Google Scholar] [CrossRef] [PubMed]
  32. Kunze, C.; Hecht, R. Semantic enrichment of building data with volunteered geographic information to improve mappings of dwelling units and population. Comput. Environ. Urban Syst. 2015, 53, 4–18. [Google Scholar] [CrossRef]
  33. Durbach, I.N.; Stewart, T.J. Modeling uncertainty in multi-criteria decision analysis. Eur. J. Oper. Res. 2012, 223, 1–14. [Google Scholar] [CrossRef]
  34. Stewart, T.J.; Durbach, I. Dealing with uncertainties in MCDA. In Multiple Criteria Decision Analysis; Springer: Berlin/Heidelberg, Germany, 2016; pp. 467–496. [Google Scholar]
  35. Guarino, N. Formal ontology, conceptual analysis and knowledge representation. Int. J. Hum.-Comput. Stud. 1995, 43, 625–640. [Google Scholar] [CrossRef] [Green Version]
  36. Shvaiko, P.; Euzenat, J. Ontology matching: State of the art and future challenges. IEEE Trans. Knowl. Data Eng. 2011, 25, 158–176. [Google Scholar] [CrossRef] [Green Version]
  37. Sirin, E.; Parsia, B.; Grau, B.C.; Kalyanpur, A.; Katz, Y. Pellet: A practical owl-dl reasoner. Web Semant. Sci. Serv. Agents World Wide Web 2007, 5, 51–53. [Google Scholar] [CrossRef]
  38. Paulheim, H. Knowledge graph refinement: A survey of approaches and evaluation methods. Semant. Web 2017, 8, 489–508. [Google Scholar] [CrossRef] [Green Version]
  39. Pileggi, S.F.; Crain, H.; Yahia, S.B. An Ontological Approach to Knowledge Building by Data Integration. In Proceedings of the International Conference on Computational Science, Amsterdam, The Netherlands, 3–5 June 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 479–493. [Google Scholar]
  40. Pileggi, S.F.; Hunter, J. An ontological approach to dynamic fine-grained Urban Indicators. Procedia Comput. Sci. 2017, 108, 2059–2068. [Google Scholar] [CrossRef]
  41. Pileggi, S.F.; Voinov, A. PERSWADE-CORE: A core ontology for communicating socio-environmental and sustainability science. IEEE Access 2019, 7, 127177–127188. [Google Scholar] [CrossRef]
  42. Pileggi, S.F. Knowledge interoperability and re-use in Empathy Mapping: An ontological approach. Expert Syst. Appl. 2021, 180, 115065. [Google Scholar] [CrossRef]
  43. Musen, M.A. The protégé project: A look back and a look forward. AI Matters 2015, 1, 4–12. [Google Scholar] [CrossRef] [Green Version]
  44. Kosara, R.; Mackinlay, J. Storytelling: The next step for visualization. Computer 2013, 46, 44–50. [Google Scholar] [CrossRef] [Green Version]
  45. Dicheva, D.; Dichev, C.; Agre, G.; Angelova, G. Gamification in education: A systematic mapping study. J. Educ. Technol. Soc. 2015, 18, 75–88. [Google Scholar]
  46. Brugha, R.; Varvasovszky, Z. Stakeholder analysis: A review. Health Policy Plan. 2000, 15, 239–246. [Google Scholar] [CrossRef]
  47. The World Bank—GDP per Capita (Current US$). Available online: https://data.worldbank.org/indicator/NY.GDP.PCAP.CD (accessed on 9 May 2022).
  48. The World Bank—Unemployment, Total (% of Total Labor Force). Available online: https://data.worldbank.org/indicator/SL.UEM.TOTL.ZS (accessed on 10 May 2022).
  49. The World Bank—Life Expectancy at Birth, Total (Years). Available online: https://data.worldbank.org/indicator/SP.DYN.LE00.IN (accessed on 10 May 2022).
  50. The World Bank—Mortality Rate, under-5 (per 1000 Live Births). Available online: https://data.worldbank.org/indicator/SH.DYN.MORT (accessed on 10 May 2022).
  51. The World Bank—Military Expenditure (% of GDP). Available online: https://data.worldbank.org/indicator/MS.MIL.XPND.GD.ZS (accessed on 10 May 2022).
  52. The World Bank—Government Expenditure on Education, Total (% of GDP). Available online: https://data.worldbank.org/indicator/SE.XPD.TOTL.GD.ZS (accessed on 10 May 2022).
  53. The World Bank—Hospital Beds (per 1000 People). Available online: https://data.worldbank.org/indicator/SH.MED.BEDS.ZS (accessed on 10 May 2022).
Figure 1. Reference Architecture.
Figure 2. Open-source software package.
Figure 3. GUI—main panel.
Figure 4. GUI—output.
Figure 5. GUI—dataset generator.
Figure 9. Specification of an input indicator.
Figure 10. Ontological structure describing the output.
Figure 11. Input data trend for the time range 1991–2020 (Case Study 1).
Figure 12. Analysis result (Case Study 1). The system performance is represented by the blue line, while the continuous green line refers to the neutral computation. Dashed lines are extreme computations.
Figure 13. Input data trend for the time range 2005–2017 (Case Study 2).
Figure 14. Analysis result (Case Study 2). The system performance is represented by the blue line, while the continuous green line refers to the neutral computation. Dashed lines are extreme computations.
Table 2. Raw data and parametrization (Case Study 1).

| Indicator          | Source | Category       | Wished Trend | Weight |
|--------------------|--------|----------------|--------------|--------|
| GDP x capita       | [47]   | Economy        | Increasing   | 2/10   |
| Unemployment Rate  | [48]   | Socio-economic | Decreasing   | 5/10   |
| Life Expectancy    | [49]   | Health         | Increasing   | 5/10   |
| Children Mortality | [50]   | Health         | Decreasing   | 8/10   |
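The parameterization of Case Study 1 (Table 2) could be captured programmatically along the following lines; the structure and field names are illustrative, not IndShaker's actual input schema:

```python
# Hypothetical encoding of the Case Study 1 parametrization (Table 2).
# Keys and values mirror the table, not the tool's configuration format.
case_study_1 = [
    {"indicator": "GDP x capita",       "source": 47, "category": "Economy",        "wished": "increasing", "weight": 2 / 10},
    {"indicator": "Unemployment Rate",  "source": 48, "category": "Socio-economic", "wished": "decreasing", "weight": 5 / 10},
    {"indicator": "Life Expectancy",    "source": 49, "category": "Health",         "wished": "increasing", "weight": 5 / 10},
    {"indicator": "Children Mortality", "source": 50, "category": "Health",         "wished": "decreasing", "weight": 8 / 10},
]
```

Note that the weights express relative importance only; they need not sum to one, since an aggregation step would typically divide by the total weight.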
Table 3. Raw data and parametrization (Case Study 2).

| Indicator                                 | Source | Category | Wished Trend | Weight |
|-------------------------------------------|--------|----------|--------------|--------|
| GDP x capita                              | [47]   | Economy  | Increasing   | 2/10   |
| Military expenditure (% of GDP)           | [51]   | Other    | Decreasing   | 8/10   |
| Gov. expenditure on education (% of GDP)  | [52]   | Social   | Increasing   | 8/10   |
| Hospital beds (x 1000 people)             | [53]   | Health   | Increasing   | 8/10   |