Review

Methods for Weighting Decisions to Assist Modelers and Decision Analysts: A Review of Ratio Assignment and Approximate Techniques

by
Barry Ezell
1,*,
Christopher J. Lynch
1 and
Patrick T. Hester
2
1
Virginia Modeling, Analysis and Simulation Center, Old Dominion University, Suffolk, VA 23435, USA
2
Modus Operandi Inc., Melbourne, FL 32901, USA
*
Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(21), 10397; https://doi.org/10.3390/app112110397
Submission received: 11 September 2021 / Revised: 25 October 2021 / Accepted: 2 November 2021 / Published: 5 November 2021
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

Computational models and simulations often involve representations of decision-making processes. Numerous methods exist for representing decision-making at varied resolution levels based on the objectives of the simulation and the desired level of fidelity for validation. Decision making relies on the type of decision and the criteria that are appropriate for making the decision; therefore, decision makers can reach unique decisions that meet their own needs given the same information. Accounting for personalized weighting scales can help to reflect a more realistic state for a modeled system. To this end, this article reviews and summarizes eight multi-criteria decision analysis (MCDA) techniques that serve as options for reaching unique decisions based on personally and individually ranked criteria. These techniques are organized into a taxonomy of ratio assignment and approximate techniques, and the strengths and limitations of each are explored. We compare these techniques' potential uses across the Agent-Based Modeling (ABM), System Dynamics (SD), and Discrete Event Simulation (DES) modeling paradigms to inform current researchers, students, and practitioners on the state of the art and to enable new researchers to utilize methods for modeling multi-criteria decisions.

1. Introduction

Decision-making processes exist in many forms within computational models and simulations, and varied needs drive the scope of how decision-making is represented across modeling paradigms. Models are commonly developed under a specified context or experimental frame [1,2], verified, validated, and tested under that perspective [3,4], and the form of a decision-making representation depends on the perspective used. The desired levels of realism and aggregation for a model influence its form and impact the insights that can be gleaned from the model when simulated. Depending on the modeled system, decisions can be responsible for dynamically altering the structure of the simulated environment, modifying behaviors or goals of simulated entities, ascertaining group membership selections, and determining the results of actions or interactions.
Many factors can lead to differences in the desired type of decision-making representation utilized within a model, including: differences in the identified relevant model context [5,6,7]; differences in stakeholder perspectives [8,9,10]; and differing perspectives on the importance of rare events versus likely outcomes [11]. Keeney and Raiffa [12] describe decision analysis as a “prescriptive approach...to think hard and systematically about some important real problems”. Thus, at its core, it helps us to understand how we should make decisions. It is the formal process of choosing from among a candidate set of decisions to determine which alternative is most valuable to the decision maker. Complicating most real decisions is that we wish to achieve multiple aims; that is, we evaluate candidate solutions based on a potentially large number of criteria. For an apartment, we may consider cost, size, location, and amenities, whereas a job selection problem may cause us to consider salary, growth potential, location, and benefits. Each of these criteria is important to us, and we need to consider them comprehensively to evaluate our problem using multi-criteria decision analysis (MCDA).
MCDA is already being applied to assist in modeling and simulating systems involving decisions made by individuals or groups. MCDA has been widely applied for natural resource management as it provides a structured approach for integrating key management factors, captures the multi-functional uses of forests, and accounts for multiple stakeholder perspectives on how to best manage the forest [13]. With respect to public health safety, MCDA has been utilized to explore programs for the prevention of Lyme disease [14] and to assess programs for preventing the spread of West Nile virus [15]. Scholten, Maurer [16] compare the use of MCDA models against integrated assessment models in identifying alternatives for long-term water supply planning at a town scale. Their study finds that all of the identified models provided better performance than the current water supply system; however, the MCDA models also provided better value ranges and formed better bases for discussion than the integrated assessment models.
We review and present a representative sample of commonplace techniques within MCDA. Our selection process considers techniques that are common for situations where there exist only a few attributes as well as situations where there may be many attributes. Situations pertaining to the representation of only a few attributes lend themselves to ratio assignment techniques, while scenarios with many attributes lend themselves to approximate techniques. The techniques reviewed and presented in our taxonomy rely on expert judgement, have been used in practice for over 30 years, and have been utilized by the authors in many research projects [17,18,19,20]. We discuss how these MCDA techniques can be utilized for modeling decision making within three common modeling paradigms. Our objective is to improve understanding of how approximate and ratio assignment techniques can be used to expand the existing decision modeling toolboxes within these modeling paradigms, as well as the circumstances under which the techniques are applicable.
Modeling paradigms represent decision making in a variety of ways, capture decisions at different levels of granularity, and generate different responses with respect to how a decision's outcome impacts a simulation. For instance, System Dynamics (SD) represents decisions based on nonlinear population behaviors aggregated as flow rates over time [21]. Decision making is reflected at the system level through information feedback and delays [22]. Discrete Event Simulation (DES) captures decisions at the system design level [23,24]. The DES decision processes represent the aggregated options for how entities traverse the modeled system [25,26]. Agent-Based Models (ABM) represent decision-making at the individual level [27], with agents' decisions based on their goals and their current states. Aggregate system behavior is examined based on how the collective interactions lead to system-level behaviors over time [28,29]. Methods for representing decisions include the use of rules [30,31], knowledge architectures [32], state charts [33,34], temporal belief logic [35], decision nodes [25], and decision trees [36], to name a few options. Time-independent paradigms such as Markov chains, Bayesian inference, Petri Nets, and Hidden Markov Models can provide instantaneous decision selections based on the current state of known information without required time dependencies [37,38,39]. Additionally, model stakeholders and model builders can arrive at different validity constraints based on the model context combined with their own experiences [6]. This can lead to different preferences for how decision-making should be specified within a simulation.
Many techniques have been established for modeling decisions, and selecting the appropriate technique should involve examining how the decision is made within the real system [21,36,40,41,42,43]. This involves examining how decisions are made and knowing what the set of possible decision options includes. The literature on individual techniques and on MCDA in general is vast. Multi-criteria decision models have been applied to study fall protection support for construction sites [44], temperature-aware routing in wireless body area networks [45], performance assessment of credit granting decision systems [46], assessment of player rankings in E-Sports [47], load profiling for power systems [48], identification of ideal business location selection [49], performance of emergency systems under COVID-19 [50], venture investment [51], failure modes analysis [52], group decision making [53], drug trafficking [54], and remote sensing for drought characterization [55]. This article focuses on summarizing common weighting methods, including their advantages and disadvantages, to aid the reader in the determination of an appropriate method given the particulars of a given decision to be made. This summary is extended to discuss the applicability of MCDA techniques for use in decision making within a sample of commonly utilized modeling paradigms. Research and practical application have shown that additive models are the most extensively used models in multi-criteria decision analysis [56]. However, a review of these techniques' uses and applications within M&S has not been conducted. We provide an assessment of the state-of-the-art of MCDA weighting methods, as well as a comparison analysis of the use of these methods in the context of a realistic problem. Throughout this article, any person eliciting attribute weights, such as a model builder, model stakeholder, risk manager, engineer, or decision maker, is referred to as a user.

2. Materials and Methods

MCDA assumes preferential independence among criteria, and many weighting methods are built on the assumption of an additive value function. Identifying appropriate means of calculating the weightings of criteria involved in a decision, such as sampling from a uniform distribution [57] or the use of rankings [58], is important for differentiating the category of MCDA technique that is suitable for the simulation. As such, we conduct our assessment under the assumption that an additive preference model of the form provided in Equation (1) is being utilized to inform decision making.
$v(\mathbf{x}) = \sum_{i=1}^{n} w_i v_i(x_i)$    (1)
where $n$ is the total number of criteria being considered, $w_i$ is the weight of the $i$-th criterion, $v_i(x_i)$ is the value function for the $i$-th criterion, and $\mathbf{x}$ is the vector of all criteria values. Within an additive value function, all weights must sum to 1:
$\sum_{i=1}^{n} w_i = 1$    (2)
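As a brief illustration of Equations (1) and (2), the Python sketch below evaluates an additive value model for one alternative. The criteria, weights, and single-criterion value functions are hypothetical assumptions intended only to show the structure of the computation, not a recommended scoring of any real decision.

```python
# Minimal sketch of the additive value model of Equations (1) and (2).
# All weights and single-criterion value functions below are illustrative assumptions.

weights = {"price": 0.40, "reliability": 0.30, "safety": 0.20, "mileage": 0.10}
assert abs(sum(weights.values()) - 1.0) < 1e-9  # Equation (2): weights sum to 1

# v_i(x_i): single-criterion value functions normalized to return values in [0, 1]
value_functions = {
    "price":       lambda x: (30000 - x) / 10000.0,  # cheaper is better over a $20k-$30k range
    "reliability": lambda x: x / 5.0,                 # 0-5 star rating
    "safety":      lambda x: x / 5.0,                 # 0-5 star rating
    "mileage":     lambda x: (x - 20) / 20.0,         # 20-40 mpg range
}

def additive_value(alternative):
    """Equation (1): v(x) = sum over i of w_i * v_i(x_i) for a single alternative."""
    return sum(w * value_functions[c](alternative[c]) for c, w in weights.items())

car = {"price": 24000, "reliability": 4, "safety": 5, "mileage": 32}
print(round(additive_value(car), 3))  # overall value of this alternative
```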
The use of an additive utility model requires that the criteria used in the model are mutually preferentially independent [59]. This means that the weight assigned to any particular criterion is not influenced by the weight assigned to any other attribute. For example, consider choosing between departure times of 6 and 10 a.m. for a flight, where the respective costs are $250 and $300. Mutual preferential independence means that you prefer the cheaper flight to the more expensive one regardless of departure time, and you prefer the later flight to the earlier one regardless of cost. If, however, you prefer the later flight regardless of the ticket price but your preference between the ticket prices depends on the departure time, then departure time is preferentially independent of cost, but the two attributes are not mutually preferentially independent. If the attributes are not mutually preferentially independent, there are techniques to combine them using a joint utility function, which are beyond the scope of this paper. Additionally, all criteria should be mutually exclusive and collectively exhaustive; that is, they represent the entirety of relevant criteria and each is independent of all others.
A taxonomy has been developed to categorize MCDA techniques to help in identifying and conveying the similarities and differences in uses, benefits, and limitations of each of the techniques. We concentrate on two general approaches for assigning weights to preference attributes within the context of a multi attribute utility model: ratio assignment and approximate techniques. Figure 1 provides the two primary classifications of techniques within MCDA, with each followed by four technique classifications.
Within each branch of the taxonomy, we have selected the techniques which are most usable in practice. Note that some techniques may be better but are unwieldy. Within each category, the techniques are ordered first by direct and then indirect methods, and then notionally from easiest for the decision maker to implement and use to the more difficult weighting methods, requiring more time and resources to set up the weights for the attributes. Borcherding, Eppel [60] suggest that the ratio, swing, tradeoff, and pricing out methods are most commonly used in practice for MCDA. However, more recently, researchers have focused on direct methods for determining weights, including equal and rank-order weighting methods. Weights are often obtained judgmentally with indirect methods [61]; therefore, direct methods that remove some of the subjectivity while determining appropriate weights have become increasingly popular.
The difference between ratio assignment and approximate techniques lies in the nature of the questions posed to elicit weights. Ratio assignment techniques assign a score to each attribute based on its absolute importance relative to a standard reference point or relative importance with respect to other attributes. The resulting weights are obtained by taking the ratio of each individual attribute score to the sum of the scores across all attributes. Approximate techniques assign an approximate weight to each attribute, strictly according to their ranking relative to other attributes with respect to importance. Approximate techniques appeal to principles of order statistics to justify weights in the absence of additional information on relative preference.
As observed in the literature over the past three decades, there are several important pitfalls to be aware of when assigning attribute weights as described below:
  • Objective and attribute structure. The structure of the objectives and the selection of weighting methods affect results and should be aligned to avoid bias;
  • Attribute definitions affect weighting. The detail with which certain attributes are specified affects the weight assigned to them; that is, the division of an attribute can increase or decrease the weight of an attribute. For example, weighting price, service level, and distance separately as criteria for a mechanic selection led to different results than weighting shop characteristics (comprised of price and service level) and distance did [62];
  • Number of attributes affects method choice. It is very difficult to directly or indirectly weight when one has to consider many attributes (e.g., double digits or more), owing to the greater difficulty associated with answering all the questions needed for developing attribute weights; Miller [63] advocates the use of five to nine attributes to avoid cognitive overburden;
  • More attributes are not necessarily better. As the number of attributes increases, there is a tendency for the weights to equalize, meaning that it becomes harder to distinguish the difference between attributes in terms of importance as the number of significant attributes increases [64];
  • Attribute dominance. If one attribute is weighted heavier than all other attributes combined, the correlation between the individual attribute score and the total preference score approaches one;
  • Weights compared within but not among decision frameworks. The interpretation of an attribute weight within a particular modeling framework should be the same regardless of the method used to obtain weights [65]; however, the same consistency in attribute weighting cannot be said to be present across all multi-criteria decision analysis frameworks [66];
  • Consider the ranges of attributes. People tend to neglect accounting for attribute ranges when assigning weights using weighting methods that do not stress them [56,67]; rather, these individuals seem to apply some intuitive interpretation of weights as a very generic degree of importance of attributes, as opposed to explicitly stating ranges, which is preferred [68,69,70]. This problem could occur when evaluating job opportunities. People may assume that salary is the most important factor; however, if the salary range is very narrow (e.g., a few hundred dollars), then other factors such as vacation days or available benefits may in fact be more important to the decision maker's happiness.
There is no superior method for eliciting attribute weights, independent of a problem's context. Consequently, users should be aware of how each method works, its drawbacks and advantages, the types of questions asked by the method, how these answers are used to generate weights, and how different the weights might be if other methods are used. Peer reviewers should be mindful of how each of these methods for eliciting attribute weights is used in practice and how users of these methods interpret the results. The specific method for eliciting attribute weights itself is not the only ingredient in stimulating the discussion. The weighting methods are only tools used in the analysis, and one should focus on the process for how the weights are used [65].
Using the defined taxonomy from Figure 1, we next evaluate the characteristics of each of the specified MCDA classifications. The strengths and weaknesses of each categorization are explored and the criteria for using the techniques are presented. Examples of use are discussed in order to convey the context under which the techniques are applicable. This is followed by a discussion of each technique's potential uses within the purview of computational modeling within the paradigms of ABM, DES, and SD.

3. Results

3.1. Ratio Assignment Techniques

Ratio Assignment Techniques ask decision makers questions whose answers imply a set of weights corresponding to the user's subjective preferences. The result of this questioning is a set of scores, or points, assigned to each attribute, from which the corresponding weights are calculated after normalizing each attribute score with respect to the total score across all attributes. A variety of ratio assignment techniques exist, including: (1) the direct assignment technique (DAT), (2) the simple multi attribute rating technique (SMART) and its variants, (3) the swing weight technique (SWING), and (4) simple pairwise comparison (PW). Each is discussed in the following subsections. The method is introduced, its steps are described, and then its strengths and limitations are presented.

3.1.1. Direct Assignment Technique (DAT)

The Direct Assignment Technique (DAT) asks users to assign weights or scores directly to preference attributes. For example, the user may need to divide a fixed pot of points (e.g., 100) among the attributes. Alternatively, users may also be asked to score each attribute over some finite scale (e.g., 0 to 100) and the resulting weights are then calculated by taking the ratio of individual scores to the total score among all attributes.
The Direct Assignment Technique is comprised of the following two steps: (1) assign points to each attribute, and (2) normalize the points such that the total is equal to one.

DAT Step 1: Assign Points to Each Attribute

One of two approaches can be adopted for completing this step. The first approach considers a fixed pot of points and asks users to divide the pot among the attributes where attributes of greater importance receive higher scores than those of lesser importance. For example, if the total pot consists of 100 points, users would assign a portion of this total among the set of attributes.
The second approach considers a finite range of potential scores and asks users to assign a point value to each attribute according to its importance, where higher importance attributes receive more points than those of lesser importance. For example, if a range of scores from 0 to 100 is considered, users would choose a point value between these limits to establish the relative importance among attributes.
As previously mentioned, it is important that an objective is established and that the ranges (swing) for each attribute are defined; therefore, a common example will be used throughout this paper. We will consider the purchase of a car for a small family, early in their careers, with one small child and a short commute. One could see that the relative weighting of attributes might change if the problem definition changed, e.g., if the decision maker had a long commute or large family.
The attributes will use notional ranges for the remainder of this paper. Again, the relative weight that a decision maker would apply may be impacted by the range. A narrow purchase price range of $20,000 to $20,200 would have less importance than that of a larger range. The criteria used for the analysis of this choice are Purchase Price, Attractiveness, Reliability, Gas Mileage, and Safety Rating. Assume a fixed pot of 1000 points to be divided among the five attributes. These criteria, their abbreviations, their least and most preferred value, and scores are shown in Table 1.

DAT Step 2: Calculate Weights

Using the point scores assigned to each of the attributes in the previous step, the second step of the Direct Assignment Technique is to calculate attribute weights. This is done by normalizing each attribute score against the total score among all attributes as shown in Equation (3).
$w_i = S_i / \sum_j S_j$    (3)
In the car buying example, based on a fixed budget of points, the weights for each attribute can be readily calculated using Equation (3). This result is shown in Table 2.
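A minimal Python sketch of the two DAT steps is shown below. The per-attribute point assignments are hypothetical stand-ins consistent with the 1000-point pot described above; they are not the values from Table 1.

```python
# Direct Assignment Technique (DAT) sketch: assign points, then normalize via Equation (3).
# The point values below are illustrative assumptions, not the values from Table 1.

points = {
    "Purchase Price": 400,
    "Reliability":    300,
    "Safety Rating":  150,
    "Attractiveness": 100,
    "Gas Mileage":     50,
}

total = sum(points.values())                      # the full 1000-point pot
weights = {attr: score / total for attr, score in points.items()}

for attr, w in weights.items():
    print(f"{attr}: {w:.2f}")
print("sum of weights =", sum(weights.values()))  # always 1.0 after normalization
```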

Strengths of This Approach

This approach is the most straightforward of the techniques presented in this paper for eliciting attribute weights in that it does not require the user to formally establish a rank order of attributes a priori. The number of questions needed to assign weights using the direct assignment technique is equal to the number of preference attributes. Thus, the effort required to obtain attribute weights scales linearly with the number of attributes.

Limitations of This Approach

Weights determined using the first approach (divide the pot) must be recalculated if new attributes are added or old ones are removed. This limitation does not apply to the second approach (allocation of absolute points), which is, however, performed without any particular reference point. Yet, establishing a reference point is something humans need to do in order to make quantitative comparisons. For example, without a specified reference point, the user may use his or her best judgment to define what a particular score means (e.g., 0, 50, or 100 on a 100-point scale) and use one or more of these as a basis for assigning scores to attributes. Unfortunately, this approach is sensitive to the chosen reference point and assigned definition, and may produce weights that differ widely between users. One approach to alleviate this limitation is to establish a well-defined constructed scale showing what different scoring levels mean.

3.1.2. Simple Multi Attribute Rating Technique (SMART)

The Simple Multi Attribute Rating Technique (SMART) [71,72] is an approach for determining weighting factors indirectly through systematic comparison of attributes against the one deemed to be the least important. SMART consists of two general activities: (1) rank order attributes according to the relative importance overall, and (2) select either the least or most important attribute as a reference point and assess how much more or less important the other attributes are with respect to the reference point. This step involves calculating attribute weights from ratios of individual attribute scores to the total score across all attributes.
Methodological improvements to SMART, known as SMARTS and SMARTER, were proposed by Edwards and Barron [73]. SMARTS (SMART using Swings) uses linear approximations to single-dimension utility functions, an additive utility model, and swing weights to improve weighting [73]. SMARTER (SMART Exploiting Ranks) builds on SMARTS but substitutes the second of the SMARTS swing weighting steps, instead using calculations based on ranks.
The SMART technique is comprised of the following four steps: (1) rank order attributes, (2) establish the reference attribute, (3) estimate the importance of other attributes with respect to the reference attribute, and (4) calculate weights.

SMART Step 1: Rank Order Attributes

Consider a finite set of attributes or criteria deemed relevant by an individual or group of experts to a particular decision problem. This first step asks experts to agree on a rank ordering of these attributes according to their relative contribution to the expert’s overall preference within an additive utility (or value) function framework. Ranking can be either from most to least important or from least to most important. A number of approaches exist to assist in holistic ranking, the most popular and well known being pairwise ranking [74].
For example, consider our automobile purchase problem. The output from Elicitation Step 1 would be a rank ordering of the five relevant criteria from least to most important as shown in Table 3.

SMART Step 2: Establish the Reference Attribute

In this second step, experts select a common reference attribute, assign it a fixed score, and estimate the extent to which the remaining attributes are more or less important than the reference attribute. Any attribute can assume the role of reference attribute. For SMART, however, it is common to assign this role to the least important attribute and assign a reference score of 10 points.

SMART Step 3: Score Attributes Relative to the Reference Attribute

Given a fixed reference attribute (i.e., the least important attribute), experts are asked how much more important the remaining attributes are with respect to the reference attribute. For example, if the least important attribute is used as the reference point with a reference score of 50 points, experts would be asked to judge how many points should be allocated to each remaining attribute with respect to this reference attribute in a relative sense (e.g., 50 more points) or absolute sense (e.g., 100 points). It is common to systematically evaluate the remaining attributes in order of increasing importance to ensure individual or group consistency between the results from this step and the ordinal rankings from step 1; however, it may be worthwhile to randomize the order in which attributes are assessed as a means for uncovering any inconsistencies in preference.
Consider the car buying example discussed in Step 1. Using the least important attribute (i.e., Gas Mileage) as the reference attribute with a reference score of 50 points, point scores could be assigned to the remaining attributes as shown in Table 4.

Step 4: Calculate Weights

Using the point scores assigned to each of the attributes in Step 3, the final step of the SMART process is to calculate attribute weights. This is done by normalizing each attribute score against the total score among all attributes as shown in Equation (3). In the car buying example, the total points distributed among all five preference attributes are 50 + 100 + 150 + 300 + 400 = 1000 points. The corresponding weights for each attribute are calculated as shown in Table 5. Note that this method can generate precisely the same weights as the DAT method (assuming the correct points are used in both).
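A short Python sketch of SMART Steps 3 and 4 follows, using the reference score of 50 points for Gas Mileage and the 1000-point total from the worked example. The mapping of the remaining scores to specific attributes is an assumption, since Table 4 is not reproduced here.

```python
# SMART sketch: score attributes relative to the least important (reference) attribute,
# then normalize with Equation (3). The per-attribute mapping below is an assumption
# consistent with the worked example's totals (50 + 100 + 150 + 300 + 400 = 1000).

scores = {
    "Gas Mileage":     50,   # reference (least important) attribute
    "Attractiveness": 100,   # judged twice as important as the reference, and so on
    "Safety Rating":  150,
    "Reliability":    300,
    "Purchase Price": 400,
}

total = sum(scores.values())                      # 1000 points in this example
weights = {attr: s / total for attr, s in scores.items()}
print(weights)  # e.g., Gas Mileage 0.05, Attractiveness 0.10, ..., Purchase Price 0.40
```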

Strengths of This Approach

SMART does not need to be repeated if old attributes are removed or new attributes are added, unless the one being removed is also the one that is least important or the one being added assumes the role of being the least important. The number of questions needed to assign weights using the SMART technique is equal to one less than the number of preference attributes. Thus, the effort required to obtain attribute weights scales linearly with the number of attributes.

Limitations of This Approach

The choice of the score for the lowest- (or highest-) weighted attribute may affect the resulting attribute weights if the scores for other attributes are not chosen based on relative comparisons. For example, if the least important attribute is given a value of 10 and some other attribute is given a value of 30, this latter value should increase to 60 if the baseline score given to the least important attribute is raised to 20.

3.1.3. Swing Weighting Techniques (SWING)

The Swing Weighting Technique [72] is an approach for determining weighting factors indirectly through systematic comparison of attributes against the one deemed to be the most important. SWING consists of two general activities: (1) rank order attributes according to the relative importance of incremental changes in attribute values considering the full range of possibilities; (2) select either the least or most important attribute as a reference point and assess how much more or less important the other attributes are with respect to the reference point. This step involves the calculation of attribute weights as the ratio of points assigned to an attribute to the total points assigned to all attributes.
The SWING Weighting technique is comprised of the following four steps: (1) rank order attributes, (2) establish the reference attribute, (3) estimate importance of other attributes with respect to the reference attribute, and (4) calculate weights.

SWING Step 1: Rank Order Attributes

Just as was the case with the SMART method, this method begins by rank ordering relevant attributes. For illustration purposes, we will use the same rank ordering as before (shown in Table 3).

SWING Step 2: Establish the Reference Attribute

In this second step, experts select a common reference attribute, assign it a fixed score, and estimate the extent to which the remaining attributes are more or less important than the reference attribute. Any attribute can assume the role of reference attribute. For the Swing Weighting method, however, this role is assigned to the most important attribute with a reference score of 100 points.

SWING Step 3: Score Attributes Relative to the Reference Attribute

Given a fixed reference attribute, experts are asked to estimate how much less important the remaining attributes are with respect to the reference attribute. For example, if the most important attribute is used as the reference point with a reference score of 100 points, experts would be asked to judge how many points should be allocated to each remaining attribute with respect to this reference attribute in a relative sense (e.g., 10 fewer points) or an absolute sense (e.g., 90 points). It is common to systematically evaluate the remaining attributes in order of increasing or decreasing importance to ensure individual or group consistency between the results from this step and the ordinal rankings from step 1; however, it may be worthwhile to randomize the order in which attributes are assessed as a means for uncovering any inconsistencies in preference.
Consider the car buying example discussed in Step 1. Using the most important attribute (purchase price) as the reference attribute, with a reference score of 100 points, point scores could be assigned to the remaining attributes as shown in Table 6.

SWING Step 4: Calculate Weights

Using the point scores assigned to each of the attributes in Step 3, the final step of the SWING process is to calculate attribute weights. This is done by normalizing each attribute score against the total score among all attributes as shown in Equation (3). In the car buying example, the total points distributed among all five preference attributes are 12.5 + 25 + 37.5 + 75 + 100 = 250 points. The corresponding weights for each attribute can then be readily calculated as shown in Table 7. Note that the same weights are produced in this case as when using the previous two methods.
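The following Python sketch mirrors the SWING calculation, anchoring the most important attribute at 100 points. As with the SMART sketch, the assignment of the remaining scores to specific attributes is an assumption (Table 6 is not reproduced); the point total matches the worked example.

```python
# SWING sketch: the most important attribute anchors the scale at 100 points; the
# remaining attributes are scored downward from it and normalized with Equation (3).
# Per-attribute scores are assumed; their sum matches the example (12.5 + 25 + 37.5 + 75 + 100).

swing_scores = {
    "Purchase Price": 100.0,   # reference (most important) attribute
    "Reliability":     75.0,
    "Safety Rating":   37.5,
    "Attractiveness":  25.0,
    "Gas Mileage":     12.5,
}

total = sum(swing_scores.values())                 # 250 points
swing_weights = {attr: s / total for attr, s in swing_scores.items()}
print(swing_weights)  # same weights as the SMART sketch: 0.40, 0.30, 0.15, 0.10, 0.05
```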

Strengths of This Approach

SWING considers the utility over the full range of attributes. SWING need not be repeated if old attributes are removed or new attributes are added unless the one being removed is also the one that is most important, or the one being added assumes the role of being the most important.
The number of questions needed to assign weights using the SWING technique is equal to one less than the number of preference attributes. Thus, the effort required to obtain attribute weights scales linearly with the number of attributes.

Limitations of This Approach

In contrast to the SMART technique, SWING weighting assigns a score with respect to a fixed upper score assigned to the most important attribute. While scores can be specified using any non-negative number up to the reference point, in practice, the presentation of the method often restricts users to specifying scores in terms of integer values. Consequently, users are limited to only 101 possible scores for each attribute, while, for SMART, the number of possible scores is infinite (e.g., 10 or higher). This means that SMART offers a greater diversity in weighting factor combinations than SWING.
The choice of the score for the most important attribute may affect the resulting attribute weights if the scores for other attributes are not chosen based on relative comparisons. For example, if the most important attribute is assigned a 100 and some other attribute is given a 50, this latter value should decrease to 40 if the baseline score given to the most important attribute is lowered to 80.

3.1.4. Simple Pairwise Comparison

The simple pairwise comparison technique for eliciting weights systematically considers all pairs of attributes in terms of which is more important. For each pairwise comparison, a point is assigned to the attribute that is considered more important. In the end, attribute weights are determined as the ratio of points assigned to each attribute divided by the total number of points distributed across all attributes.
The simple pairwise comparison technique is comprised of the following two steps: (1) pairwise rank the attributes, and (2) calculate weights.

Pairwise Step 1: Pairwise Rank the Attributes

Given a set of N attributes, systematically compare pairs of attributes in terms of which one of the two is more important relative to small changes over its range. Of the pair, the one judged to be more important is assigned a point. The process is repeated until all N * (N − 1)/2 pairs are evaluated. For example, consider the automobile purchase problem. Using the five criteria, there are 5 * (5 − 1)/2 = 10 pairs to evaluate. The output from this pairwise ranking step might yield the following results:
  • Purchase Price vs. Attractiveness: Purchase Price Wins;
  • Purchase Price vs. Reliability: Purchase Price Wins;
  • Purchase Price vs. Gas Mileage: Purchase Price Wins;
  • Purchase Price vs. Safety Rating: Purchase Price Wins;
  • Attractiveness vs. Reliability: Reliability Wins;
  • Attractiveness vs. Gas Mileage: Attractiveness Wins;
  • Attractiveness vs. Safety Rating: Safety Wins;
  • Reliability vs. Gas Mileage: Reliability Wins;
  • Reliability vs. Safety Rating: Reliability Wins;
  • Gas Mileage vs. Safety Rating: Safety Wins.
The point distribution obtained using these 10 comparisons is shown in Table 8.
Note that the least important attribute in the above example has a score of zero points (as it won none of the pairwise comparisons). The resulting weight factor in this case will be zero unless some constant offset or systematic bias is applied to all scores. Such an offset or bias desensitizes the resulting weights of the attributes to changes in the points distributed to each via a pairwise ranking procedure—the greater the offset, the less sensitive the resulting weighting distribution will be to small changes in attribute scores. For example, if an offset of 2 points or 10 points is used, the revised score distributions shown in Table 9 would result.

Pairwise Step 2: Calculate Weights

Using the point scores assigned to each of the attributes in the previous step, the second step of the simple pairwise comparison technique is to calculate attribute weights. This is done by normalizing each attribute score against the total score among all attributes using Equation (3). In our car buying example, the total points distributed among all five preference attributes is N * (N − 1)/2, or 5 * (5 − 1)/2 = 10 points. The corresponding weights for each attribute can then be readily calculated as shown in Table 10. It should be noted that this method generates unique weights as compared with the previous three methods. This is because the set of attainable weights is more coarsely discretized (it is limited by the number of comparisons that are made) compared to the previous methods.
To demonstrate the impact of imposing an offset or systematic bias to the attribute scores, the weights obtained from adding 2 points and 10 points to each are shown in Table 11.
As the size of the offset or bias increases, the weights become more equally distributed across attributes. In the limit of an infinite offset, the approach's results mirror those of the equal weighting technique.
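The Python sketch below reproduces the pairwise counting, offsetting, and normalization described above, using the ten judgments listed in Step 1; it is an illustrative sketch rather than a definitive implementation.

```python
# Simple pairwise comparison sketch: count wins across all N*(N-1)/2 comparisons,
# optionally add a constant offset to every attribute, then normalize (Equation (3)).
from itertools import combinations

attributes = ["Purchase Price", "Attractiveness", "Reliability", "Gas Mileage", "Safety Rating"]

# Winner of each pair, as judged in the worked example above.
winners = {
    ("Purchase Price", "Attractiveness"): "Purchase Price",
    ("Purchase Price", "Reliability"):    "Purchase Price",
    ("Purchase Price", "Gas Mileage"):    "Purchase Price",
    ("Purchase Price", "Safety Rating"):  "Purchase Price",
    ("Attractiveness", "Reliability"):    "Reliability",
    ("Attractiveness", "Gas Mileage"):    "Attractiveness",
    ("Attractiveness", "Safety Rating"):  "Safety Rating",
    ("Reliability", "Gas Mileage"):       "Reliability",
    ("Reliability", "Safety Rating"):     "Reliability",
    ("Gas Mileage", "Safety Rating"):     "Safety Rating",
}

def pairwise_weights(offset=0):
    points = {a: offset for a in attributes}          # constant offset desensitizes the weights
    for pair in combinations(attributes, 2):
        points[winners[pair]] += 1                    # one point per pairwise win
    total = sum(points.values())
    return {a: p / total for a, p in points.items()}

print(pairwise_weights(offset=0))   # Gas Mileage receives a weight of zero with no offset
print(pairwise_weights(offset=2))   # offsets pull all weights toward equality
```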

Strengths of This Approach

This approach is very easy to complete since it only requires users to judge which of two options is preferred. Such comparisons are much easier for humans to perform than ranking a complete list of attributes or assigning scores to each attribute. This approach also facilitates documentation of the reasoning supporting the resulting weight factors. It breaks the questioning process down into simple comparisons of two attributes, each of which only requires evidence to support which attribute is more or less important than the other. The elicitation method requires that the user consider attribute ranges when making pairwise judgments. Weights can be readily recalculated with the addition of new attributes simply by incorporating all additional pairwise comparisons.

Limitations of This Approach

This approach does not employ any checks of internal consistency (i.e., for transitivity). It is up to the user to check whether the results make sense and are consistent. For instance, if A > B (i.e., A is preferred to B), and B > C, then logically A > C; however, there is nothing in the process that prevents intransitive assessments from being made.
A Special Case of Pairwise Comparison: The Analytic Hierarchy Process
The Analytic Hierarchy Process (AHP) is a common process used to elicit decision maker priorities using a series of pairwise comparisons [74]. Its use has increased in popularity [75] because of the ease of explanation, ease of facilitating a group in the process, and availability of user-friendly software to implement the process. However, it is generally not looked upon favorably within the decision analysis community because of several drawbacks. Velasquez and Hester [76] identify problems due to interdependence between criteria and alternatives, the potential for inconsistency between judgment and ranking criteria, and the possibility of rank reversal [77] as disadvantages of the method. It is worth noting that, with every weighting approach, there will be drawbacks.

3.2. Approximate Techniques

Approximate Techniques establish weights based primarily on the ordinal rankings of attributes based on relative importance. Approximate techniques adopt the perspective that the actual attribute weights are samples from a population of possible weights, and the distribution of weight may be thought of as a random error component to a true weight [78]. As a result, approximate techniques seek the expected value of attribute weights and use these expected weights in utility models. A variety of approximate techniques exist, including: (1) equal weighting, (2) rank ordered centroid technique, (3) rank summed weighting technique, and (4) rank reciprocal technique.

3.2.1. Equal Weighting Technique

The Equal Weighting Technique assumes that no information is known about the relative importance of preference attributes or that the information pertinent to discriminating among attributes based on preference is unreliable. Under these conditions, one can adopt maximum entropy arguments and assume that the distribution of true weights follows a uniform distribution [79].
Given a set of N preference attributes, the Equal Weighting Technique assigns a weight wi to each attribute as shown in Equation (4).
$w_i = 1/N$    (4)
For example, consider our automobile purchase decision. Assuming no additional information is available to establish a preference ordering of the five problem attributes, an equal weight of 1/5 (0.20) is assigned to each.

Strengths of This Approach

The Equal Weighting Technique is the simplest of all weighting techniques, which includes both ratio assignment techniques and approximate techniques. The only prerequisite for applying the Equal Weighting Technique is a judgment that an attribute matters or is significant [78]. The Equal Weighting Technique is a formal name for what is naturally done in the early stages of analysis.

Limitations of This Approach

The weights resulting from application of the Equal Weighting Technique may produce inaccurate rankings if the true weights associated with one or more criteria dominate the others. As with any technique based on mathematical principles, the weights obtained via the Equal Weighting Technique are only as good as its assumptions. The principle underlying the Equal Weighting Technique is the use of the uniform distribution constructed across all attributes. Alternative techniques should be used if this assumption is not applicable, or if more information exists that could assist in establishing a quantitative difference between attributes.
When some information is available to help distinguish between attributes on the basis of importance, alternative techniques will produce better estimates of attribute weights. When the number of attributes is 10 or less, it is more useful to spend resources to first establish a rank ordering of the attributes using group discussion or pairwise ranking and then follow up with an alternative approximate technique.

3.2.2. Rank Ordered Centroid (ROC) Technique

The Rank Ordered Centroid Technique assumes knowledge of the ordinal ranking of preference attributes with no other supporting quantitative information on how much more important one attribute is relative to the others [80]. As a consequence of this assumption, it is assumed that the weights are uniformly distributed on the simplex of rank ordered weights [78].
The Rank Ordered Centroid Technique is comprised of the following two steps: (1) rank order attributes and establish rank indices, and (2) calculate the rank ordered centroid for each attribute.

ROC Step 1: Rank Order Attributes and Establish Rank Indices

Consider a finite set of N attributes or criteria deemed relevant by an individual or group of experts to a particular decision problem. This first step asks users to agree on a rank ordering of these attributes according to their relative contribution to the expert’s overall preference within an additive utility (or value) function framework. A number of approaches exist to assist in holistic ranking, the most popular being pairwise ranking [81]. The resultant ranking is from most important to least important, where the index i = 1 is assigned to the most important attribute, and the index i = N is assigned to the least important attribute.
For example, consider the typical choice problem centered on which automobile to purchase. The output from this step would be a rank ordering of these preference attributes from most to least important, as shown in Table 12.

ROC Step 2: Calculate the Rank Ordered Centroid for Each Attribute

The Rank Ordered Centroid Technique assigns to each of N rank ordered attributes a weight wi according to Equation (5).
$w_i = \dfrac{1}{N} \sum_{k=i}^{N} \dfrac{1}{k}$    (5)
where, again, the attributes are ordered from most important (i = 1) to least important (i = N).
In our car buying example, the weights assigned to each attribute can be calculated as shown in Table 13. Note that the predefined formula used in the approximate techniques limits the number of weights available for assignment. Thus, while the ordinality of weight preferences remains, the magnitude of the weights and the distances between them have changed. This is the tradeoff that a decision maker must make: is more control over the weights preferred? If so, a ratio assignment technique provides more control. If time is more crucial or if decision makers are not as informed regarding the problem, an approximate technique may prove more appropriate.
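A Python sketch of Equation (5) follows; the rank ordering of the five car-buying attributes is the one implied by the earlier pairwise example, since Table 12 is not reproduced here.

```python
# Rank Ordered Centroid (ROC) sketch for Equation (5): w_i = (1/N) * sum_{k=i..N} 1/k,
# where i = 1 denotes the most important attribute.

def roc_weights(n):
    return [sum(1.0 / k for k in range(i, n + 1)) / n for i in range(1, n + 1)]

ranked = ["Purchase Price", "Reliability", "Safety Rating", "Attractiveness", "Gas Mileage"]
for attribute, w in zip(ranked, roc_weights(len(ranked))):
    print(f"{attribute}: {w:.3f}")   # 0.457, 0.257, 0.157, 0.090, 0.040 for N = 5
```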

Strengths of This Approach

The Rank Ordered Centroid technique provides a means for coming up with meaningful weights based solely on ordinal rankings of attributes based on importance. This is particularly helpful since, in situations consisting of many users with diverse opinions, rank orderings of attributes may be the only aspect of preference that can achieve consensus agreement. Calculating weights using the Rank Ordered Centroid technique can be easily implemented using standard spreadsheet tools or calculated using a calculator.

Limitations of This Approach

When some information is available to help distinguish between attributes on the basis of importance, alternative techniques will produce better estimates of attribute weights. When the number of attributes is 10 or less, it is more useful to spend resources to first establish a rank ordering of the attributes using group discussion or pairwise ranking, and then follow up with an alternative approximate technique. As with any technique based on mathematical principles, the weights obtained via the Rank Ordered Centroid technique are only as good as its assumptions. The principle underlying the ROC technique is the use of the uniform distribution (justified by Laplace's principle of insufficient reason) across the range of possible weights that can be assumed by an attribute based on its importance rank. Alternative techniques should be used if this assumption is not applicable, or if more information exists that could assist in establishing a quantitative difference between attributes.

3.2.3. Rank Summed Weighting (RS) Technique

To approximate attribute weights, the Rank Summed Weighting technique uses information on the rank order of attributes on the basis of importance combined with the weighting of each attribute in relation to its rank order [61].
The Rank Summed Weighting technique is comprised of the following two steps: (1) rank order attributes and establish rank indices, and (2) calculate the rank summed weight for each attribute.

RS Step 1: Rank Order Attributes and Establish Rank Indices

Consider a finite set of N attributes or criteria deemed relevant by an individual or group of users to a particular decision problem. This first step asks users to agree on a rank ordering of these attributes according to their relative contribution to the expert’s overall preference within an additive utility (or value) function framework. A number of approaches exist to assist in holistic ranking, the most popular being pairwise ranking [81]. The resultant ranking is from most important to least important, where the index i =1 is assigned to the most important attribute and the index i = N is assigned to the least important attribute. For our car purchase example, we maintain the same ordinal ranking as shown in Table 12 for the previous method.

RS Step 2: Calculate the Rank Summed Weight for Each Attribute

The Rank Summed Weighting technique assigns, to each of N rank ordered attributes, a weight wi according to Equation (6).
$w_i = \dfrac{N - i + 1}{\sum_{k=1}^{N} (N - k + 1)} = \dfrac{2(N - i + 1)}{N(N + 1)}$    (6)
with the attributes ordered from most important (i = 1) to least important (i = N).
The rank exponent weighting technique is a generalization of the rank sum weighting technique, as shown in Equation (7).
$w_i = \dfrac{(N - i + 1)^p}{\sum_{k=1}^{N} (N - k + 1)^p}$    (7)
In this case, a p of 0 results in equal weights, p = 1 is the rank sum, and increasing p values further disperses the weight distribution among attributes.
In the car buying example above, the weights assigned to each attribute can be calculated using the Rank Summed Weighting technique as shown in Table 14. Once again, ordinality of criteria preference remains when compared with previous methods; however, the spread of weights changes due to the predetermined rank summed formula.
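Equations (6) and (7) can be computed directly, as in the short Python sketch below; with p = 1 it reproduces the rank sum weights, and with p = 0 it collapses to equal weighting.

```python
# Rank Sum (Equation (6)) and Rank Exponent (Equation (7)) sketch, with i = 1 for the
# most important attribute. p = 1 gives rank sum weights; p = 0 gives equal weights.

def rank_exponent_weights(n, p=1.0):
    raw = [(n - i + 1) ** p for i in range(1, n + 1)]
    total = sum(raw)
    return [r / total for r in raw]

n = 5
print(rank_exponent_weights(n, p=1))   # rank sum: 5/15, 4/15, 3/15, 2/15, 1/15
print(rank_exponent_weights(n, p=2))   # larger p spreads the weights further apart
print(rank_exponent_weights(n, p=0))   # equal weighting: 0.2 for every attribute
```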

Strengths of This Approach

The Rank Summed Weighting technique provides a means for coming up with meaningful weights based solely on ordinal rankings of attributes based on importance. This is particularly helpful since, in situations consisting of many users with diverse opinions, rank orderings of attributes may be the only aspect of preference that can achieve consensus agreement. Calculating weights using the Rank Summed Weighting technique can be easily implemented using standard spreadsheet tools or calculated using a calculator.

Limitations of This Approach

When some information is available to help distinguish between attributes on the basis of importance, alternative techniques will produce better estimates of attribute weights. When the number of attributes is 10 or less, it is more useful to spend resources to first establish a rank ordering of the attributes using group discussion or pairwise ranking, and then follow-up with alternative approximate techniques.
As with any technique based on mathematical principles, the weights obtained via the Rank Summed Weighting technique are only as good as their assumptions. The principle underlying the RS technique is the weighting of each attribute in proportion to its rank order in terms of importance. Alternative techniques should be used if this assumption is not applicable or unreasonable, or if more information exists that could assist in establishing a quantitative difference between attributes.

3.2.4. Rank Reciprocal Weighting (RR) Technique

The rank reciprocal method is similar to the ROC and RS methods. It involves use of the reciprocal of ranks, divided by the sum of the reciprocals [61].
The Rank Reciprocal Weighting technique is comprised of the following two steps: (1) rank order attributes and establish rank indices, and (2) calculate the rank reciprocal weight for each attribute.

RR Step 1: Rank Order Attributes and Establish Rank Indices

Consider a finite set of N attributes or criteria deemed relevant by an individual or group of users to a particular decision problem. This first step asks users to agree on a rank ordering of these attributes according to their relative contribution to the expert’s overall preference within an additive utility (or value) function framework. A number of approaches exist to assist in holistic ranking, the most popular being pairwise ranking [81]. The resultant ranking is from most important to least important, where the index i = 1 is assigned to the most important attribute, and the index i = N is assigned to the least important attribute. For our car purchase example, we maintain the same ordinal ranking as shown in Table 12 for the previous method.

RR Step 2: Calculate the Rank Reciprocal Weight for Each Attribute

The Rank Reciprocal Weighting technique assigns to each of the N rank ordered attributes a weight wi according to Equation (8).
$w_i = \dfrac{1/i}{\sum_{k=1}^{N} 1/k}$    (8)
where, again, the attributes are ordered from most important (i = 1) to least important (i = N). In the car buying example above, the weights assigned to each attribute can be calculated using the RR technique as shown in Table 15. Again, ordinality of criteria preference remains when compared with previous methods; however, the spread of weights changes due to the predetermined rank reciprocal formula.
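A corresponding Python sketch of Equation (8) is given below for five ranked attributes; the ordering again follows the car-buying example.

```python
# Rank Reciprocal (RR) sketch for Equation (8): w_i = (1/i) / sum_{k=1..N} (1/k),
# with i = 1 for the most important attribute.

def rank_reciprocal_weights(n):
    denominator = sum(1.0 / k for k in range(1, n + 1))
    return [(1.0 / i) / denominator for i in range(1, n + 1)]

print([round(w, 3) for w in rank_reciprocal_weights(5)])
# [0.438, 0.219, 0.146, 0.109, 0.088] -- most to least important attribute
```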

Strengths of This Approach

Similarly to the rank ordered centroid and rank summed weighting approaches, the rank reciprocal technique provides a mechanism for calculating weights using only an ordinal ranking of relevant attributes. This technique is easily implemented using spreadsheet tools or calculated using a calculator.

Limitations of This Approach

As with the previous two methods, rank reciprocal weighting is best used when only an ordering of attributes is possible. When more specific weighting is possible, use of a ratio assignment technique is advised.

4. Discussion

Eight major techniques spanning the past several decades for computing weights in MCDA environments have been discussed. In a comparison of the categories of ratio assignment and approximate techniques, Jia, Fischer [78] found that the selection accuracy of quantitatively stated ratio weights was as good as or better than that of the best approximate methods under all conditions studied (except when the assessed weights are purely random). Because linear decision models are quite robust with respect to change of weights [40], using approximate weights yields satisfactory quality under a wide variety of circumstances. Despite the robustness of linear models, even noisy information about the ranking of attributes improves decisions substantially. When response error is present, decision quality decreases as the number of attributes or the number of alternatives rated against these attributes increases.

4.1. Characteristics of Multi-Criteria Decision Analysis Techniques

Knowing multiple ways to represent weightings with respect to making decisions provides numerous benefits for model building. It can help in evaluating, identifying, and selecting the best decisions for a given situation, whether this is provided as the primary model output or occurs frequently throughout execution as a component of the model’s behaviors. Identifying and understanding different mechanisms for assigning weights helps to convey the complexities that can arise in modeling decision processes. This can be paired with verification and validation activities to provide a transparent connection between model design and simulation outcomes to aid in traceability and reproducibility [82,83]. Understanding the uses of the individual techniques can aid in the use of techniques based on the characteristics of the decision making within a modeled system.
Models can incorporate a combination of ratio assignment and approximate techniques and select the most appropriate method based on a given decision (refer to the fourth column of Table 16 and Table 17). Determining which criteria are important can help flesh out a model's conceptualization and serve as supporting documentation for how and why certain variables are included in the model design. The observed advantages, disadvantages, and potential uses of each technique are summarized in Table 16 for ratio assignment techniques and in Table 17 for approximate techniques. How these techniques can inform the development of computational models is explored in Section 4.2.

4.2. MCDA as Decision-Making Options for Computational Models

Capturing and representing decision making processes is a common facet of constructing simulation models. Decision making can exist at many levels within a model, such as representing how an individual decides when to purchase a car, assisting a store manager in developing a personnel schedule for improved cost management, or examining investment decisions. Simulations allow observations on the performance of modeled behaviors to be conducted and analyzed [84] so that modelers or decision makers can gain insight into whether the selected decision making processes led to the expected outcomes and to help them in making decisions based on these results. However, models that incorporate human decisions may produce unexpected chaos as a result of a minority of the decision makers [85], and it can be challenging to identify what to capture and how to incorporate it within a model.
The applicability of the ratio assignment and approximate techniques differs based on the context of the problem being addressed, the decisions being made, the decision makers being modeled, and the criteria that have been deemed necessary for a given decision. We provide an overview of the components within ABM, SD, and DES that are relevant to implementing these techniques within a simulation model. Table 18 provides a comparison of ratio assignment techniques and Table 19 provides a comparison of approximate techniques. These tables are intended to provide guidance and initial steps towards incorporating MCDA techniques; they are not intended to be exhaustive comparisons.
Determining the appropriate MCDA technique to select is highly dependent upon the given system context, the outcomes being examined, any performance metrics being assessed, the level of aggregation desired, and many other potential criteria. The criteria included in the weighting combinations serve as candidates for the verification and validation stages of model development and testing. For verification, the implementation of the criteria should be traceable back to the MCDA technique identified in the model design and should be checked for consistency against the subject matter experts’ specifications or any other conceptual model documentation. For validation, the selected MCDA categorization, as well as the specific technique and its method for distributing weights, should be checked against what is known about the system. The determination of whether the decision process in the real system is more accurately represented as a ratio assignment or as an approximation should be defensible based on the data known about the system. This reinforces the credibility of the technique selection and the model construction.
Many simulation platforms natively allow some form of equal or percent-based choice to be implemented within the model, whether these choice options exist at the system level in the form of flows [86], at the process level in the form of path logic [26,87], or at the individual level to capture the decisions of individual agents [43]. However, implementing decisions that are based on the rankings, weightings, or comparisons of multiple attributes is generally not as straightforward a task. Table 18 and Table 19 are intended to inform the model builder of circumstances under which the reviewed approximate and ratio assignment techniques may be of use. Simulation and domain expertise are still required to properly implement and test the technique. The application of an MCDA technique within a simulation should fill a necessary gap, maintain traceability to the model’s requirements, and not introduce new gaps or unnecessary challenges to the simulation [82,88,89,90].
How criteria weightings are conceptualized and how they are implemented in practice can vary greatly across modeling paradigms. For instance, consider the equal weighting method from the approximate techniques. While equal weighting is a conceptually simple way to assign weights to a given set of criteria, the considerations for which criteria are important, how the criteria are interconnected, and the potential results of the decision may be very different across paradigms. This can result from differences in desired levels of scale and aggregation, continuous or discrete representation of components, and the desired time advance granularity [88,89,91,92,93,94]. Section 4.2.1, Section 4.2.2 and Section 4.2.3 discuss the potential applicability of utilizing ratio assignment and approximate techniques within the ABM, DES, and SD modeling paradigms.

4.2.1. MCDA Applicability for Agent-Based Models

As weighting methods are based on what is deemed to be important by the decision maker, MCDA can provide several unique benefits to ABM. Agent populations are commonly heterogeneous, spatially separate from their environment, dynamic, and behave based on agent–agent and agent–environment rules [28,29,84]. Diverse methods, algorithms, and selection criteria represent decision-making opportunities in ways that are more representative of the system being modeled [43,95]. Simulated agents can make large numbers of decisions throughout their lifetimes as they constantly seek to meet their goals, follow through on behaviors, and progress through life states. Based on the results of their cumulative interactions, their current and past experiences, and the accomplishment and/or redefinition of their objectives over time, agents’ weighting criteria may change as well. For instance, an agent faced with a decision about allocating his or her funds may not have a clear ranking of importance while happy and sufficiently wealthy; however, that agent may have a well-established importance ordering while unhappy and lacking wealth [29]. An agent’s internal logic can change the weighting method being utilized to better reflect its current state over time and achieve a more realistic representation of the system.
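As a hedged illustration of this idea, the minimal sketch below shows an agent that falls back on equal weighting when it has no clear priorities and switches to rank sum weights otherwise. The agent attributes, thresholds, criteria, and priority ordering are illustrative assumptions of ours, not drawn from a specific published model.

```python
# Hypothetical sketch: an agent switches weighting technique based on its state.
class Agent:
    def __init__(self, wealth, happiness):
        self.wealth = wealth
        self.happiness = happiness

    def spending_weights(self):
        criteria = ["savings", "necessities", "leisure"]
        if self.happiness > 0.7 and self.wealth > 100:
            # No clear ordering of importance: fall back on equal weighting.
            return {c: 1 / len(criteria) for c in criteria}
        # Unhappy or lacking wealth: a well-established ordering, approximated with rank sum.
        n = len(criteria)
        ranked = ["necessities", "savings", "leisure"]  # assumed priority order
        return {c: 2 * (n + 1 - i) / (n * (n + 1))
                for i, c in enumerate(ranked, start=1)}

print(Agent(wealth=50, happiness=0.2).spending_weights())
# necessities 0.5, savings ~0.333, leisure ~0.167
```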
Different specifications of agent behaviors can lead to similar outcomes, and agent-based models can assist in identifying which agent behaviors provide the simplest explanation of the system behavior [28]. The agent-based modeling paradigm provides the ability to observe system-level behaviors that result from each individual agent making individualized decisions based on local knowledge and personal perspective. Decision categorizations can include combinations of emotional, cognitive, and social factors [96]; personality traits in the face of life-threatening environmental stimuli [97]; and communality and affinity for selecting group formations [98]. Recognition-primed decision making has been employed to represent the decision process of a senior military commander to reflect the variability of humans in making decisions within an operational military environment where problems are commonly complex and complete information is not often known [41]. Knoeri et al. [99] construct a model using awareness and incentives to enhance decision processes for recycling materials to explore the effects on construction wastes. Balke and Gilbert [43] provide a comprehensive and comparative review of 14 architectures for agent decision making that focuses on the architectures’ cognitive, affective, social, learning, and norm consideration features.
Incorporating MCDA techniques into decision processes can benefit the ways that social norms are represented within a population, increase fidelity based on the environmental weightings that pertain to individual decisions, and represent geographical factors as weighting mechanics within the context of personal decision factors. Norms represent the ways that people are expected to act within a society and how they are punished when they act differently [100]. Norm representations and their effects on the population vary from modeling cooperation among unrelated individuals [101], to anxiety between group affiliations [34], to environmental and social stressors [97]. The level of agreement among the model’s builders and stakeholders, the availability of supporting empirical evidence, and the number of relevant decision attributes should be considered when evaluating applicable MCDA techniques. Due to the potentially large number of agents and decisions, the computational complexity of the weighting technique and how often weightings are recalculated should also be factored into the technique selection.

4.2.2. MCDA Applicability for Discrete Event Simulation Models

Discrete Event Simulation models generally represent decisions at an aggregate level where mechanisms for defining entity movement or routing are specified. The entities moving through the system have no control over the decision itself; instead, progression-based decisions are made for the entities at the system level using percentages or logical determinations, such as entity type, percent chance, or shortest queue length [24,25,26,102,103]. Starting at the conceptual model building phase, focusing on DES decision elements aids in the identification of relevant criteria, fuels learning and collaboration, and contributes to assessing model validation [25,104,105]. While the static structure of DES simulations generally dictates the paths that one can follow, many simulation platforms allow for the incorporation of logic within entities to allow greater depth in path selection [26,106].
MCDA can be utilized to determine the weights of the transition options coming out of decision nodes, to represent path-selection logic within entities, and to determine or alter resource schedules. Decision attributes that have clear separations of importance may be better represented using ratio assignment techniques, whereas attributes with assumed equal weightings, or weightings based on sampling from the uniform distribution, may be better suited to approximate techniques. The number of attributes present in the relevant decision-making processes, the number of entity types, and the level of agreement within the simulation’s conceptual model, its simulation building team, or its stakeholders should be evaluated within the context of the problem being modeled to select the appropriate technique within the corresponding MCDA category. Within the domain of healthcare, MCDA techniques can provide alternative means for modeling staff scheduling, patient admission, patient routing, and resource allocation.
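As a hedged illustration of routing entities out of a decision node, the sketch below collapses weighted criterion scores for candidate paths into routing probabilities. The criteria, scores, and path names are hypothetical and not tied to a particular simulation platform; the criterion weights could come from any of the ratio assignment techniques reviewed above.

```python
import random

# Hypothetical sketch: turning weighted path scores into a routing decision
# at a DES decision node.
criterion_weights = {"wait_time": 0.5, "distance": 0.3, "staff_availability": 0.2}

# Each candidate path is scored 0-1 on each criterion (hypothetical values).
path_scores = {
    "fast_track": {"wait_time": 0.9, "distance": 0.4, "staff_availability": 0.6},
    "main_queue": {"wait_time": 0.3, "distance": 0.8, "staff_availability": 0.9},
}

def route_entity(path_scores, weights):
    # Weighted additive utility per path, then sample in proportion to utility.
    utilities = {p: sum(weights[c] * s[c] for c in weights)
                 for p, s in path_scores.items()}
    total = sum(utilities.values())
    return random.choices(list(utilities),
                          weights=[u / total for u in utilities.values()])[0]

print(route_entity(path_scores, criterion_weights))
```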
A survey of simulation application priorities emphasizes the relevance of human performance modeling, modeling complex behavior, and human decision-making to the health care and service industries [107]. Within the scope of quantitative methods, DES models can incorporate resources and constraints, include soft variables from surveys and expert opinions, and cope with the high levels of variability existing between and within variables [108]. Data such as patient arrival times, discharge times, bed types, and time to bed within an emergency department are common variables when examining system performance and exploring improvements [24]. Such criteria could be utilized to construct ratio assignment or approximate weightings within a simulation to drive simulated decisions.

4.2.3. MCDA Applicability for System Dynamics Models

Decision-making in SD models is generally represented through the flows that connect stocks and is implemented in the form of ordinary or partial differential equations [22]. As such, a decision criterion is represented within an equation as a variable, with its coefficient representing the weighting. These coefficients can be constants established at initialization or can change dynamically throughout execution; in SD platforms, they are commonly represented as stocks and auxiliary variables. MCDA techniques can be incorporated to handle situations where dynamic weightings are needed based on aggregate states of the SD simulation, different possible interpretations of auxiliary variables, or structural changes to the simulation.
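As a hedged illustration of dynamic coefficient weighting, the sketch below recomputes flow coefficients each time step from the current ranking of auxiliary variables using Rank Ordered Centroid weights. The stock, auxiliary variables, and update rule are hypothetical rather than drawn from a published SD model.

```python
# Hypothetical sketch: a flow equation whose coefficients are ROC weights
# recomputed each step from the current ranking of auxiliary variables.
def roc_weights(n):
    return [sum(1 / k for k in range(i, n + 1)) / n for i in range(1, n + 1)]

stock = 100.0
dt = 1.0
for step in range(5):
    # Rank the auxiliary pressures on the flow (hypothetical ordering rule).
    auxiliaries = {"demand": 0.8 - 0.1 * step,
                   "capacity": 0.5,
                   "backlog": 0.2 + 0.1 * step}
    ranked = sorted(auxiliaries, key=auxiliaries.get, reverse=True)
    weights = dict(zip(ranked, roc_weights(len(ranked))))
    inflow = 10.0 * sum(weights[name] * auxiliaries[name] for name in auxiliaries)
    stock += inflow * dt
    print(step, round(stock, 2), {k: round(v, 3) for k, v in weights.items()})
```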
In SD models, decision environments are represented through the dynamic behavior of the system based on what is known about the state of the system from its variables and inventory levels [22]. These models have been used to identify, evaluate, and assist in making economic decisions for a variety of systems, such as enhanced oil recovery operations [109], industrial production and distribution systems [110], and inventory logistics within healthcare systems [111]. The aggregate decision-making representations of these models can result in chaos due to the human decision-making behaviors of a significant minority within the model [85].
MCDA has been utilized within SD models of health preparedness for pandemic influenza to evaluate mitigation strategies based on epidemiological parameters and policy makers’ prioritizations [112]. The integration of MCDA with SD has been successful in representing multiple goals, objectives, and perspectives for community-based forest management [113]. A review of sustainable supply chain management identifies that integrating SD with MCDA can help to address shortcomings in scientific rigor, namely neglected model validation and the lack of disclosure of model equations [114]. Selecting suitable MCDA techniques based on Table 16 and Table 17 requires considering the quantity of variables, the degree of consensus among stakeholders, and the availability of empirical data to inform validation.

4.3. Limitations

This article does not consider strategies for assessing weight factors in other choice frameworks (e.g., ordered weighted averaging or multiplicative utility models), nor does it consider techniques for obtaining the coefficients of linear models, proper or improper, in general. This research also does not focus on defining value functions for use in Equation (1); rather, our focus is on how a decision maker should best determine the appropriate weights for different criteria in an MCDA problem.

5. Conclusions

When faced with many attributes and in the absence of more information, it is often more convenient to use approximate techniques for assigning attribute weights. One can also use an approximate technique for an initial weighting and further refine it using a ratio assignment technique. Whenever possible, rationale should accompany any judgments leading to attribute weights. Rationale includes documenting the information and reasoning that support each individual judgment, even if it is based strictly on intuition. Providing justification increases model transparency and exposes the model to critical review.
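As a small illustration of that workflow (the criteria are hypothetical and the refined point values are placeholders that a decision maker would supply), ROC weights can seed an initial weighting that is then refined and renormalized via direct assignment:

```python
# Hypothetical sketch: seed weights with ROC, then refine by direct assignment.
def roc_weights(n):
    return [sum(1 / k for k in range(i, n + 1)) / n for i in range(1, n + 1)]

criteria = ["price", "reliability", "safety"]  # ranked most to least important
initial = dict(zip(criteria, roc_weights(len(criteria))))

# Decision maker adjusts the approximate weights (placeholder point values),
# then the refined points are renormalized as in the direct assignment technique.
refined_points = {"price": 45, "reliability": 35, "safety": 20}
total = sum(refined_points.values())
refined = {c: p / total for c, p in refined_points.items()}

print("initial:", {c: round(w, 3) for c, w in initial.items()})
print("refined:", refined)
```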
Further, when possible, it is useful to apply more than one technique for eliciting weights of preference attributes. If the results following the honest application of two or more techniques are the same, the credibility of the corresponding utility model is increased. In contrast, if the results are not the same, the disagreement provides a basis for settling differences in opinion, discussing model limitations and assumptions, and diagnosing hidden biases.
The use of MCDA allows for the creation of more realistic or granular representations of decision-making processes for computational models. We have provided a classification of ratio assignment and approximate techniques for conducting MCDA along with an evaluation of the strengths and weaknesses of each technique. The characteristics supporting the suitability of a given ratio assignment or approximate technique under a given context and modeling paradigm are discussed. Model building considerations that should be accounted for in applying MCDA techniques within computational models in practice are presented for ABM, DES, and SD modeling paradigms.
Future work is needed to evaluate other categories of MCDA techniques and how they support model conceptualization, implementation, verification, validation, and analysis. Incorporating MCDA techniques into model decision processes aids in traceability between the developed simulation and the modeled systems. This can aid verification and validation practices in determining the correctness of the implemented simulation and accuracy of the representation of the real system. Additional research is needed into the connections between documenting MCDA development within a simulation and effective means for utilizing it to aid conceptualization, verification, and validation.
The ability to determine the criteria that should be involved in a decision, how to approach weighting the criteria, and how to validate the weightings depends upon system knowledge, stakeholder knowledge, and empirical evidence. In this regard, growing social media platform usage continues to increase the volume of easily accessible personal information posted directly about people’s daily activities, key events, and their likes and dislikes. As a result, there are growing possibilities for connecting simulations directly to the “human” component of data by utilizing these sources of real information for deriving decision-making criteria. Kavak et al. [115] explore the use of social media data in simulations as sources of input data, for calibration, for recognizing mobility patterns, and for identifying communication patterns. Padilla et al. [116] use tweets to identify individual-level tourist visit patterns and sentiment. Recent advances explore the characteristics comprising sentiment-based scores utilizing posted information on Twitter [117] and through YouTube videos [118]. These information sources can provide new avenues towards identifying decision criteria and desired outcomes and towards developing individual-level and population-based behaviors and rules, which can further fuel the use of MCDA within existing modeling paradigms.
Ultimately, there is no one universal “right” way to conduct weighting for an MCDA problem. As discussed earlier, ordinality is preserved when any of the techniques is used correctly. However, coarser weights result from approximate techniques and more refined weights are possible with ratio techniques; which method is appropriate depends on the problem context. This article benefits practitioners by providing a comprehensive review and comparison of common weighting methods that can help to guide the selection of weighting methods to better address the questions being asked of a modeled system.

Author Contributions

Conceptualization, B.E., C.J.L. and P.T.H.; methodology, B.E., C.J.L. and P.T.H.; writing—original draft preparation, B.E., C.J.L. and P.T.H.; writing—review and editing, B.E. and P.T.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ören, T. Simulation and Reality: The Big Picture. Int. J. Model. Simul. Sci. Comput. 2010, 1, 1–25. [Google Scholar] [CrossRef]
  2. Zeigler, B.P.; Prähofer, H.; Kim, T.G. Theory of Modeling and Simulation: Integrating Discrete Event and Continuous Complex Dynamic Systems, 2nd ed.; Academic Press: New York, NY, USA, 2000. [Google Scholar]
  3. Sargent, R.G. Verification and Validation of Simulation Models. J. Simul. 2013, 7, 12–24. [Google Scholar] [CrossRef] [Green Version]
  4. Zeigler, B.P.; Luh, C.; Kim, T. Model Base Management for Multifacetted Systems. Trans. Model. Comput. Simul. 1991, 1, 195–218. [Google Scholar]
  5. Yilmaz, L. On the Need for Contextualized Introspective Models to Improve Reuse and Composability of Defense Simulations. J. Def. Model. Simul. 2004, 1, 141–151. [Google Scholar] [CrossRef]
  6. Spiegel, M.; Reynolds, P.F.; Brogan, D.C. A Case Study of Model Context for Simulation Composability and Reusability. In Proceedings of the 2005 Winter Simulation Conference, Orlando, FL, USA, 4 December 2005; pp. 437–444. [Google Scholar]
  7. Casilimas, L.; Corrales, D.C.; Solarte Montoya, M.; Rahn, E.; Robin, M.-H.; Aubertot, J.-N.; Corrales, J.C. HMP-Coffee: A Hierarchical Multicriteria Model to Estimate the Profitability for Small Coffee Farming in Colombia. Appl. Sci. 2021, 11, 6880. [Google Scholar] [CrossRef]
  8. Lynch, C.J. A Multi-Paradigm Modeling Framework for Modeling and Simulating Problem Situations. Master’s Thesis, Old Dominion University, Norfolk, VA, USA, 2014. [Google Scholar]
  9. Vennix, J.A. Group Model-Building: Tackling Messy Problems. Syst. Dyn. Rev. 1999, 15, 379–401. [Google Scholar] [CrossRef]
  10. Fernández, E.; Rangel-Valdez, N.; Cruz-Reyes, L.; Gomez-Santillan, C. A New Approach to Group Multi-Objective Optimization under Imperfect Information and Its Application to Project Portfolio Optimization. Appl. Sci. 2021, 11, 4575. [Google Scholar] [CrossRef]
  11. Barry, P.; Koehler, M. Simulation in Context: Using Data Farming for Decision Support. In Proceedings of the 2004 Winter Simulation Conference, Washington, DC, USA, 5–8 December 2004. [Google Scholar]
  12. Keeney, R.L.; Raiffa, H.G. Decisions with Multiple Objectives: Preferences and Value Tradeoffs; Wiley & Sons: New York, NY, USA, 1976. [Google Scholar]
  13. Mendoza, G.A.; Martins, H. Multi-criteria decision analysis in natural resource management: A critical review of methods and new modelling paradigms. For. Ecol. Manag. 2006, 230, 1–22. [Google Scholar] [CrossRef]
  14. Aenishaenslin, C.; Gern, L.; Michel, P.; Ravel, A.; Hongoh, V.; Waaub, J.-P.; Milord, F.; Bélanger, D. Adaptation and evaluation of a multi-criteria decision analysis model for Lyme disease prevention. PLoS ONE 2015, 10, e0135171. [Google Scholar] [CrossRef]
  15. Hongoh, V.; Campagna, C.; Panic, M.; Samuel, O.; Gosselin, P.; Waaub, J.-P.; Ravel, A.; Samoura, K.; Michel, P. Assessing interventions to manage West Nile virus using multi-criteria decision analysis with risk scenarios. PLoS ONE 2016, 11, e0160651. [Google Scholar] [CrossRef]
  16. Scholten, L.; Maurer, M.; Lienert, J. Comparing multi-criteria decision analysis and integrated assessment to support long-term water supply planning. PLoS ONE 2017, 12, e0176663. [Google Scholar]
  17. Ezell, B.C. Infrastructure Vulnerability Assessment Model (I-VAM). Risk Anal. Int. J. 2007, 27, 571–583. [Google Scholar] [CrossRef]
  18. Collins, A.J.; Hester, P.; Ezell, B.; Horst, J. An Improvement Selection Methodology for Key Performance Indicators. Environ. Syst. Decis. 2016, 36, 196–208. [Google Scholar] [CrossRef]
  19. Ezell, B.; Lawsure, K. Homeland Security and Emergency Management Grant Allocation. J. Leadersh. Account. Ethics 2019, 16, 74–83. [Google Scholar]
  20. Caskey, S.; Ezell, B. Prioritizing Countries by Concern Regarding Access to Weapons of Mass Destruction Materials. J. Bioterror. Biodefense 2021, 12, 2. [Google Scholar]
  21. Sterman, J.D. Modeling managerial behavior: Misperceptions of feedback in a dynamic decision making experiment. Manag. Sci. 1989, 35, 321–339. [Google Scholar] [CrossRef] [Green Version]
  22. Forrester, J.W. Industrial Dynamics; The MIT Press: Cambridge, MA, USA, 1961. [Google Scholar]
  23. Robinson, S. Discrete-event simulation: From the pioneers to the present, what next? J. Oper. Res. Soc. 2005, 56, 619–629. [Google Scholar] [CrossRef] [Green Version]
  24. Hamrock, E.; Paige, K.; Parks, J.; Scheulen, J.; Levin, S. Discrete Event Simulation for Healthcare Organizations: A Tool for Decision Making. J. Healthc. Manag. 2013, 58, 110–124. [Google Scholar] [CrossRef]
  25. Padilla, J.J.; Lynch, C.J.; Kavak, H.; Diallo, S.Y.; Gore, R.; Barraco, A.; Jenkins, B. Using Simulation Games for Teaching and Learning Discrete-Event Simulation. In Proceedings of the 2016 Winter Simulation Conference, Arlington, VA, USA, 11–14 December 2016; pp. 3375–3385. [Google Scholar]
  26. Kelton, W.D.; Sadowski, R.P.; Swets, N.B. Simulation with Arena, 5th ed.; McGraw-Hill: New York, NY, USA, 2010. [Google Scholar]
  27. Epstein, J.M. Agent-Based Computational Models and Generative Social Science. Complexity 1999, 4, 41–60. [Google Scholar] [CrossRef]
  28. Gilbert, N. Using Agent-Based Models in Social Science Research. In Agent-Based Models; Sage: Los Angeles, CA, USA, 2008; pp. 30–46. [Google Scholar]
  29. Epstein, J.M.; Axtell, R. Growing Artificial Societies: Social Science from the Bottom Up; The MIT Press: Cambridge, MA, USA, 1996. [Google Scholar]
  30. Schelling, T.C. Dynamic Models of Segregation. J. Math. Sociol. 1971, 1, 143–186. [Google Scholar] [CrossRef]
  31. Smith, E.B.; Rand, W. Simulating Macro-Level Effects from Micro-Level Observations. Manag. Sci. 2018, 64, 5405–5421. [Google Scholar] [CrossRef]
  32. Wooldridge, M.; Jennings, N.R. (Eds.) Agent Theories, Architectures, and Languages: A Survey. In Intelligent Agents ATAL; Springer: Berlin/Heidelberg, Germany, 1994; pp. 1–39. [Google Scholar]
  33. Lynch, C.J.; Diallo, S.Y.; Tolk, A. Representing the Ballistic Missile Defense System using Agent-Based Modeling. In Proceedings of the 2013 Spring Simulation Multi-Conference-Military Modeling & Simulation Symposium, San Diego, CA, USA, 7–10 April 2013; Society for Computer Simulation International: Vista, CA, USA, 2013; pp. 1–8. [Google Scholar]
  34. Shults, F.L.; Gore, R.; Wildman, W.J.; Lynch, C.J.; Lane, J.E.; Toft, M. A Generative Model of the Mutual Escalation of Anxiety Between Religious Groups. J. Artif. Soc. Soc. Simul. 2018, 21, 1–25. [Google Scholar] [CrossRef]
  35. Wooldridge, M.; Fisher, M. (Eds.) A Decision Procedure for a Temporal Belief Logic. In Temporal Logic ICTL 1994; Springer: Berlin/Heidelberg, Germany, 1994; pp. 317–331. [Google Scholar]
  36. Sarker, I.H.; Colman, A.; Han, J.; Khan, A.I.; Abushark, Y.B.; Salah, K. BehavDT: A Behavioral Decision Tree Learning to Build User-Centric Context-Aware Predictive Model. Mob. Netw. Appl. 2020, 25, 1151–1161. [Google Scholar] [CrossRef] [Green Version]
  37. Ching, W.-K.; Huang, X.; Ng, M.K.; Siu, T.-K. Markov Chains: Models, Algorithms and Applications, 2nd ed.; Springer: New York, NY, USA, 2013. [Google Scholar] [CrossRef]
  38. Razzaq, M.; Ahmad, J. Petri Net and Probabilistic Model Checking Based Approach for the Modelling, Simulation and Verification of Internet Worm Propagation. PLoS ONE 2015, 10, e0145690. [Google Scholar] [CrossRef]
  39. Sokolowski, J.A.; Banks, C.M. Modeling and Simulation Fundamentals: Theoretical Underpinnings and Practical Domains; John Wiley & Sons: Hoboken, NJ, USA, 2010. [Google Scholar]
  40. Dawes, R.M.; Corrigan, B. Linear models in decision making. Psychol. Bull. 1974, 81, 95–106. [Google Scholar]
  41. Sokolowski, J.A. Enhanced decision modeling using multiagent system simulation. Simulation 2003, 79, 232–242. [Google Scholar]
  42. Maani, K.E.; Maharaj, V. Links between systems thinking and complex decision making. Syst. Dyn. Rev. J. Syst. Dyn. Soc. 2004, 20, 21–48. [Google Scholar] [CrossRef]
  43. Balke, T.; Gilbert, N. How do agents make decisions? A survey. J. Artif. Soc. Soc. Simul. 2014, 17, 1–30. [Google Scholar] [CrossRef]
  44. Jin, H.; Goodrum, P.M. Optimal Fall Protection System Selection Using a Fuzzy Multi-Criteria Decision-Making Approach for Construction Sites. Appl. Sci. 2021, 11, 5296. [Google Scholar] [CrossRef]
  45. Kim, B.-S.; Shah, B.; Al-Obediat, F.; Ullah, S.; Kim, K.H.; Kim, K.-I. An enhanced mobility and temperature aware routing protocol through multi-criteria decision making method in wireless body area networks. Appl. Sci. 2018, 8, 2245. [Google Scholar] [CrossRef] [Green Version]
  46. García, V.; Sánchez, J.S.; Marqués, A.I. Synergetic application of multi-criteria decision-making models to credit granting decision problems. Appl. Sci. 2019, 9, 5052. [Google Scholar] [CrossRef] [Green Version]
  47. Urbaniak, K.; Wątróbski, J.; Sałabun, W. Identification of Players Ranking in E-Sport. Appl. Sci. 2020, 10, 6768. [Google Scholar] [CrossRef]
  48. Panapakidis, I.P.; Christoforidis, G.C. Optimal selection of clustering algorithm via Multi-Criteria Decision Analysis (MCDA) for load profiling applications. Appl. Sci. 2018, 8, 237. [Google Scholar] [CrossRef] [Green Version]
  49. Shaikh, S.A.; Memon, M.; Kim, K.-S. A Multi-Criteria Decision-Making Approach for Ideal Business Location Identification. Appl. Sci. 2021, 11, 4983. [Google Scholar] [CrossRef]
  50. Clemente-Suárez, V.J.; Navarro-Jiménez, E.; Ruisoto, P.; Dalamitros, A.A.; Beltran-Velasco, A.I.; Hormeño-Holgado, A.; Laborde-Cárdenas, C.C.; Tornero-Aguilera, J.F. Performance of Fuzzy Multi-Criteria Decision Analysis of Emergency System in COVID-19 Pandemic. An Extensive Narrative Review. Int. J. Environ. Res. Public Health 2021, 18, 5208. [Google Scholar] [CrossRef]
  51. Liu, Y.; Zhang, H.; Wu, Y.; Dong, Y. Ranking Range Based Approach to MADM under Incomplete Context and its Application in Venture Investment Evaluation. Technol. Econ. Dev. Econ. 2019, 25, 877–899. [Google Scholar] [CrossRef]
  52. Xiao, J.; Wang, X.; Zhang, H. Exploring the Ordinal Classifications of Failure Modes in the Reliability Management: An Optimization-Based Consensus Model with Bounded Confidences. Group Decis. Negot. 2021, 1–32. [Google Scholar] [CrossRef]
  53. Zhang, H.; Zhao, S.; Kou, G.; Li, C.-C.; Dong, Y.; Herrera, F. An Overview on Feedback Mechanisms with Minimum Adjustment or Cost in Consensus Reaching in Group Decision Making: Research Paradigms and Challenges. Inf. Fusion 2020, 60, 65–79. [Google Scholar] [CrossRef]
  54. Sapiano, N.J.; Hester, P.T. Systemic Analysis of a Drug Trafficking Mess. Int. J. Syst. Syst. Eng. 2019, 9, 277–306. [Google Scholar] [CrossRef]
  55. Jiao, W.; Wang, L.; McCabe, M.F. Multi-Sensor Remote Sensing for Drought Characterization: Current Status, Opportunities and a Roadmap for the Future. Remote Sens. Environ. 2021, 256, 112313. [Google Scholar] [CrossRef]
  56. Keeney, R.L. Multiplicative Utility Functions. Oper. Res. 1974, 22, 22–34. [Google Scholar]
  57. Tervonen, T.; van Valkenhoef, G.; Baştürk, N.; Postmus, D. Hit-and-Run Enables Efficient Weight Generation for Simulation-based Multiple Criteria Decision Analysis. Eur. J. Oper. Res. 2013, 224, 552–559. [Google Scholar] [CrossRef]
  58. Zanakis, S.H.; Solomon, A.; Wishart, N.; Dublish, S. Multi-Attribute Decision Making: A Simulation Comparison of Select Methods. Eur. J. Oper. Res. 1998, 107, 507–529. [Google Scholar] [CrossRef]
  59. Von Nitzsch, R.; Weber, M. The effect of attribute ranges on weights in multiattribute utility measurements. Manag. Sci. 1993, 39, 937–943. [Google Scholar]
  60. Borcherding, K.; Eppel, T.; Von Winterfeldt, D. Comparison of weighting judgments in multiattribute utility measurement. Manag. Sci. 1991, 37, 1603–1619. [Google Scholar]
  61. Stillwell, W.; Seaver, D.; Edwards, W. A comparison of weight approximation techniques in multiattribute utility decision making. Organ. Behav. Hum. Perform. 1981, 28, 62–77. [Google Scholar]
  62. Pöyhönen, M.; Vrolijk, H.; Hämäläinen, R.P. Behavioral and procedural consequences of structural variation in value trees. Eur. J. Oper. Res. 2001, 134, 216–227. [Google Scholar] [CrossRef]
  63. Miller, G.A. The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capability for Processing Information. Psychol. Rev. 1956, 63, 81–97. [Google Scholar] [CrossRef] [Green Version]
  64. Stillwell, W.G.; von Winterfeldt, D.; John, R.S. Comparing hierarchical and non-hierarchical weighting methods for eliciting multiattribute value models. Manag. Sci. 1987, 33, 442–450. [Google Scholar] [CrossRef]
  65. Pöyhönen, M. On Attribute Weighting in Value Trees. Ph.D. Thesis, Helsinki University of Technology, Espoo, Finland, 1998. [Google Scholar]
  66. Choo, E.U.; Schoner, B.; Wedley, W.C. Interpretation of criteria weights in multicriteria decision making. Comput. Ind. Eng. 1999, 37, 527–541. [Google Scholar] [CrossRef]
  67. Fischer, G.W. Range sensitivity of attribute weights in multiattribute value models. Organ. Behav. Hum. Decis. Process. 1995, 62, 252–266. [Google Scholar]
  68. Korhonen, P.; Wallenius, J. Behavioral Issues in MCDM: Neglected research questions. J. Multicriteria Decis. Anal. 1996, 5, 178–182. [Google Scholar] [CrossRef]
  69. Belton, V.; Gear, T. On a short-coming of Saaty’s method of analytic hierarchies. Omega 1983, 3, 228–230. [Google Scholar]
  70. Salo, A.A.; Hämäläinen, R.P. On the measurement of preferences in the Analytic Hierarchy Process. J. Multicriteria Decis. Anal. 1997, 6, 309–343. [Google Scholar] [CrossRef]
  71. Edwards, W. How to use multiattribute utility measurement for social decisionmaking. IEEE Trans. Syst. Man Cybern. 1977, 7, 326–340. [Google Scholar] [CrossRef]
  72. Von Winterfeldt, D.; Edwards, W. Decision Analysis and Behavioral Research; Cambridge University Press: Cambridge, MA, USA, 1986. [Google Scholar]
  73. Edwards, W.; Barron, F. SMARTS and SMARTER: Improved simple methods for multiattribute utility measurement. Organ. Behav. Hum. Decis. Process. 1994, 60, 306–325. [Google Scholar]
  74. Saaty, T.L. The Analytic Hierarchy Process; McGraw Hill: New York, NY, USA, 1980. [Google Scholar]
  75. Wallenius, J.; Dyer, J.S.; Fishburn, P.C.; Steuer, R.E.; Zionts, S.; Deb, K. Multiple Criteria Decision Making, Multiattribute Utility Theory: Recent Accomplishments and What Lies Ahead. Manag. Sci. 2008, 54, 1339–1340. [Google Scholar]
  76. Velasquez, M.; Hester, P.T. An analysis of multi-criteria decision making methods. Int. J. Oper. Res. 2013, 10, 56–66. [Google Scholar]
  77. Dyer, J.S. Remarks on the Analytic Hierarchy Process. Manag. Sci. 1990, 35, 249–258. [Google Scholar] [CrossRef]
  78. Jia, J.; Fischer, G.W.; Dyer, J.S. Attribute weighting methods and decision quality in the presence of response error: A simulation study. J. Behav. Decis. Mak. 1998, 11, 85–105. [Google Scholar]
  79. Kapur, J.N. Maximum Entropy Principles in Science and Engineering; New Age: New Delhi, India, 2009. [Google Scholar]
  80. Barron, F.; Barrett, B. Decision quality using ranked attribute weights. Manag. Sci. 1996, 42, 1515–1523. [Google Scholar]
  81. U.S. Coast Guard. Coast Guard Process Improvement Guide: Total Quality Tools for Teams and Individuals, 2nd ed.; U.S. Government Printing Office: Boston, MA, USA, 1994.
  82. Lynch, C.J.; Diallo, S.Y.; Kavak, H.; Padilla, J.J. A Content Analysis-based Approach to Explore Simulation Verification and Identify its Current Challenges. PLoS ONE 2020, 15, e0232929. [Google Scholar] [CrossRef]
  83. Diallo, S.Y.; Gore, R.; Lynch, C.J.; Padilla, J.J. Formal Methods, Statistical Debugging and Exploratory Analysis in Support of System Development: Towards a Verification and Validation Calculator Tool. Int. J. Model. Simul. Sci. Comput. 2016, 7, 1641001. [Google Scholar] [CrossRef] [Green Version]
  84. Axelrod, R. Advancing the Art of Simulation in the Social Sciences. Complexity 1997, 3, 16–22. [Google Scholar] [CrossRef] [Green Version]
  85. Sterman, J.D. Deterministic chaos in models of human behavior: Methodological issues and experimental results. Syst. Dyn. Rev. 1988, 4, 148–178. [Google Scholar]
  86. Fortmann-Roe, S. Insight Maker: A General-Purpose Tool for Web-based Modeling & Simulation. Simul. Model. Pract. Theory 2014, 47, 28–45. [Google Scholar] [CrossRef] [Green Version]
  87. Padilla, J.J.; Diallo, S.Y.; Barraco, A.; Kavak, H.; Lynch, C.J. Cloud-Based Simulators: Making Simulations Accessible to Non-Experts and Experts Alike. In Proceedings of the 2014 Winter Simulation Conference, Savannah, GA, USA, 7–10 December 2014; pp. 3630–3639. [Google Scholar]
  88. Lynch, C.J.; Padilla, J.J.; Diallo, S.Y.; Sokolowski, J.A.; Banks, C.M. A Multi-Paradigm Modeling Framework for Modeling and Simulating Problem Situations. In Proceedings of the 2014 Winter Simulation Conference, Savannah, GA, USA, 7–10 December 2014; pp. 1688–1699. [Google Scholar]
  89. Lynch, C.J.; Diallo, S.Y. A Taxonomy for Classifying Terminologies that Describe Simulations with Multiple Models. In Proceedings of the 2015 Winter Simulation Conference, Huntington Beach, CA, USA, 6–9 December 2015; pp. 1621–1632. [Google Scholar]
  90. Tolk, A.; Diallo, S.Y.; Padilla, J.J.; Herencia-Zapana, H. Reference Modelling in Support of M&S—Foundations and Applications. J. Simul. 2013, 7, 69–82. [Google Scholar] [CrossRef]
  91. MacKenzie, G.R.; Schulmeyer, G.G.; Yilmaz, L. Verification technology potential with different modeling and simulation development and implementation paradigms. In Proceedings of the Foundations for V&V in the 21st Century Workshop, Laurel, MD, USA, 22–24 October 2002; pp. 1–40. [Google Scholar]
  92. Eldabi, T.; Balaban, M.; Brailsford, S.; Mustafee, N.; Nance, R.E.; Onggo, B.S.; Sargent, R. Hybrid Simulation: Historical Lessons, Present Challenges and Futures. In Proceedings of the 2016 Winter Simulation Conference, Arlington, VA, USA, 11–14 December 2016; pp. 1388–1403. [Google Scholar]
  93. Vangheluwe, H.; De Lara, J.; Mosterman, P.J. An Introduction to Multi-Paradigm Modelling and Simulation. In Proceedings of the AIS’2002 Conference (AI, Simulation and Planning in High Autonomy Systems), Lisboa, Portugal, 7–10 April 2002; pp. 9–20. [Google Scholar]
  94. Balaban, M.; Hester, P.; Diallo, S. Towards a Theory of Multi-Method M&S Approach: Part I. In Proceedings of the 2014 Winter Simulation Conference, Savannah, GA, USA, 7–10 December 2014; pp. 1652–1663. [Google Scholar]
  95. Bonabeau, E. Agent-based modeling: Methods and techniques for simulating human systems. Proc. Natl. Acad. Sci. USA. 2002, 99 (Suppl. S3), 7280–7287. [Google Scholar] [CrossRef] [Green Version]
  96. Epstein, J.M. Agent_Zero: Toward Neurocognitive Foundations for Generative Social Science; Princeton University Press: Princeton, NJ, USA, 2014. [Google Scholar]
  97. Shults, F.L.; Lane, J.E.; Wildman, W.J.; Diallo, S.; Lynch, C.J.; Gore, R. Modelling terror management theory: Computer simulations of the impact of mortality salience on religiosity. Relig. Brain Behav. 2018, 8, 77–100. [Google Scholar] [CrossRef]
  98. Lemos, C.M.; Gore, R.; Lessard-Phillips, L.; Shults, F.L. A network agent-based model of ethnocentrism and intergroup cooperation. Qual. Quant. 2019, 54, 463–489. [Google Scholar] [CrossRef] [Green Version]
  99. Knoeri, C.; Nikolic, I.; Althaus, H.-J.; Binder, C.R. Enhancing recycling of construction materials: An agent based model with empirically based decision parameters. J. Artif. Soc. Soc. Simul. 2014, 17, 1–13. [Google Scholar] [CrossRef] [Green Version]
  100. Axelrod, R. An evolutionary approach to norms. Am. Political Sci. Rev. 1986, 80, 1095–1111. [Google Scholar] [CrossRef] [Green Version]
  101. Santos, F.P.; Santos, F.C.; Pacheco, J.M. Social Norms of Cooperation in Small-Scale Societies. PLoS Comput. Biol. 2016, 12, e1004709. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  102. Borshchev, A. The Big Book of Simulation Modeling: Multimethod Modeling with AnyLogic 6; AnyLogic North America: Oakbrook Terrace, IL, USA, 2013; 612p. [Google Scholar]
  103. Schriber, T.J.; Brunner, D.T.; Smith, J.S. Inside Discrete-Event Simulation Software: How it Works and Why it Matters. In Proceedings of the 2013 Winter Simulation Conference, Washington, DC, USA, 8–11 December 2013; pp. 424–438. [Google Scholar]
  104. Padilla, J.J.; Lynch, C.J.; Kavak, H.; Evett, S.; Nelson, D.; Carson, C.; del Villar, J. Storytelling and Simulation Creation. In Proceedings of the 2017 Winter Simulation Conference, Las Vegas, NV, USA, 3–6 December 2017; pp. 4288–4299. [Google Scholar]
  105. Tanrıöver, Ö.Ö.; Bilgen, S. UML-Based Conceptual Models and V&V. In Conceptual Modeling for Discrete Event Simulation; Robinson, S., Brooks, R., Kotiadis, K., van Der Zee, D.-J., Eds.; CRC Press: Boca Raton, FL, USA, 2010; pp. 383–422. [Google Scholar]
  106. Pegden, C.D. Introduction to SIMIO. In Proceedings of the 2008 Winter Simulation Conference, Piscataway, NJ, USA, 7–10 December 2008; pp. 229–235. [Google Scholar]
  107. Taylor, S.; Robinson, S. So Where to Next? A Survey of the Future for Discrete-Event Simulation. J. Simul. 2006, 1, 1–6. [Google Scholar] [CrossRef]
  108. Eldabi, T.; Irani, Z.; Paul, R.J.; Love, P.E. Quantitative and Qualitative Decision-Making Methods in Simulation Modelling. Manag. Decis. 2002, 40, 64–73. [Google Scholar] [CrossRef]
  109. Jones, J.W.; Secrest, E.L.; Neeley, M.J. Computer-based Support for Enhanced Oil Recovery Investment Decisions. Dynamica 1980, 6, 2–9. [Google Scholar]
  110. Mosekilde, E.; Larsen, E.R. Deterministic Chaos in the Beer Production-Distribution Model. Syst. Dyn. Rev. 1988, 4, 131–147. [Google Scholar] [CrossRef]
  111. Al-Qatawneh, L.; Hafeez, K. Healthcare logistics cost optimization using a multi-criteria inventory classification. In Proceedings of the International Conference on Industrial Engineering and Operations Management, Kuala Lumpur, Malaysia, 22–24 January 2011; pp. 506–512. [Google Scholar]
  112. Araz, O.M. Integrating Complex System Dynamics of Pandemic Influenza with a Multi-Criteria Decision Making Model for Evaluating Public Health Strategies. J. Syst. Sci. Syst. Eng. 2013, 22, 319–339. [Google Scholar] [CrossRef]
  113. Mendoza, G.A.; Prabhu, R. Combining Participatory Modeling and Multi-Criteria Analysis for Community-based Forest Management. For. Ecol. Manag. 2005, 207, 145–156. [Google Scholar] [CrossRef]
  114. Rebs, T.; Brandenburg, M.; Seuring, S. System Dynamics Modeling for Sustainable Supply Chain Management: A Literature Review and Systems Thinking Approach. J. Clean. Prod. 2019, 208, 1265–1280. [Google Scholar] [CrossRef]
  115. Kavak, H.; Vernon-Bido, D.; Padilla, J.J. Fine-Scale Prediction of People’s Home Location using Social Media Footprints. In Proceedings of the 2018 International Conference on Social Computing, Behavioral-Cultural Modeling, & Prediction and Behavior Representation in Modeling and Simulation, Washington, DC, USA, 10–13 July 2018; pp. 1–6. [Google Scholar]
  116. Padilla, J.J.; Kavak, H.; Lynch, C.J.; Gore, R.J.; Diallo, S.Y. Temporal and Spatiotemporal Investigation of Tourist Attraction Visit Sentiment on Twitter. PLoS ONE 2018, 13, e0198857. [Google Scholar] [CrossRef] [Green Version]
  117. Gore, R.; Diallo, S.Y.; Padilla, J.J. You are what you Tweet: Connecting the Geographic Variation in America’s Obesity Rate to Twitter Content. PLoS ONE 2015, 10, e0133505. [Google Scholar] [CrossRef] [Green Version]
  118. Meza, X.V.; Yamanaka, T. Food Communication and its Related Sentiment in Local and Organic Food Videos on YouTube. J. Med. Internet Res. 2020, 22, e16761. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Taxonomy of Multi-Criteria Decision Analysis techniques.
Table 1. Criteria for decision problem.

| Abbreviation | Criteria | Least Preferred | Most Preferred | Score |
|---|---|---|---|---|
| (P) | Purchase Price | $30,000 | $15,000 | 400 points |
| (R) | Reliability (Initial Owner complaints) | 150 | 10 | 300 points |
| (S) | Safety | 3 star | 5 star | 150 points |
| (A) | Attractiveness (qualitative) | Low | High | 100 points |
| (G) | Gas Mileage | 20 mpg | 30 mpg | 50 points |
Table 2. DAT weight evaluation.

| Abbreviation | Criteria | Formula | Weight |
|---|---|---|---|
| (P) | Purchase Price | 400/1000 | = 0.40 |
| (R) | Reliability | 300/1000 | = 0.30 |
| (S) | Safety | 150/1000 | = 0.15 |
| (A) | Attractiveness | 100/1000 | = 0.10 |
| (G) | Gas Mileage | 50/1000 | = 0.05 |
| | Sum | 1000 points | = 1.00 |
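As a minimal sketch (our own illustrative helper, using the point values from Table 1), the direct assignment weights in Table 2 amount to normalizing the assigned points so that the weights sum to one:

```python
# Minimal sketch of direct assignment: normalize assigned points into weights.
# Point values are taken from Table 1; weights reproduce Table 2.
points = {"P": 400, "R": 300, "S": 150, "A": 100, "G": 50}

def direct_assignment_weights(points):
    total = sum(points.values())
    return {criterion: value / total for criterion, value in points.items()}

print(direct_assignment_weights(points))
# {'P': 0.4, 'R': 0.3, 'S': 0.15, 'A': 0.1, 'G': 0.05}
```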
Table 3. Rank ordering of decision criteria.

| Abbreviation | Criteria | Rank |
|---|---|---|
| (G) | Gas Mileage | 1 |
| (A) | Attractiveness | 2 |
| (S) | Safety | 3 |
| (R) | Reliability | 4 |
| (P) | Purchase Price | 5 |
Table 4. Total points for decision criteria.

| Abbreviation | Criteria | Points | Total Points |
|---|---|---|---|
| (G) | Gas Mileage | 50 | = 50 |
| (A) | Attractiveness | 50 | = 100 |
| (S) | Safety | 100 | = 150 |
| (R) | Reliability | 250 | = 300 |
| (P) | Purchase Price | 350 | = 400 |
Table 5. Weight calculation for SMART.

| Abbreviation | Criteria | Formula | Weight |
|---|---|---|---|
| (G) | Gas Mileage | 50/1000 | = 0.050 |
| (A) | Attractiveness | 100/1000 | = 0.100 |
| (S) | Safety | 150/1000 | = 0.150 |
| (R) | Reliability | 300/1000 | = 0.300 |
| (P) | Purchase Price | 400/1000 | = 0.400 |
| | Sum | 1000 points | = 1.00 |
Table 6. Ordinal ranking for SWING.

| Abbreviation | Criteria | Ordinal Ranking |
|---|---|---|
| (P) | Purchase Price | 100 |
| (R) | Reliability | 75 |
| (S) | Safety | 37.5 |
| (A) | Attractiveness | 25 |
| (G) | Gas Mileage | 12.5 |
Table 7. Weight calculation for SWING.

| Abbreviation | Criteria | Formula | Weight |
|---|---|---|---|
| (P) | Purchase Price | 100/250 | = 0.400 |
| (R) | Reliability | 75/250 | = 0.300 |
| (S) | Safety | 37.5/250 | = 0.150 |
| (A) | Attractiveness | 25/250 | = 0.100 |
| (G) | Gas Mileage | 12.5/250 | = 0.050 |
| | Sum | 250 points | = 1.00 |
Table 8. Point calculation for pairwise method.

| Abbreviation | Criteria | Points |
|---|---|---|
| (P) | Purchase Price | 4 points |
| (R) | Reliability | 3 points |
| (S) | Safety | 2 points |
| (A) | Attractiveness | 1 point |
| (G) | Gas Mileage | 0 points |
Table 9. Point calculation for pairwise method using offsets.

| Abbreviation | Criteria | Points (2/10 Offset) |
|---|---|---|
| (P) | Purchase Price | 6 points / 14 points |
| (R) | Reliability | 5 points / 13 points |
| (S) | Safety | 4 points / 12 points |
| (A) | Attractiveness | 3 points / 11 points |
| (G) | Gas Mileage | 2 points / 10 points |
Table 10. Weights for pairwise method.

| Abbreviation | Criteria | Formula | Weight |
|---|---|---|---|
| (P) | Purchase Price | 4/10 | = 0.4 |
| (R) | Reliability | 3/10 | = 0.3 |
| (S) | Safety | 2/10 | = 0.2 |
| (A) | Attractiveness | 1/10 | = 0.1 |
| (G) | Gas Mileage | 0/10 | = 0.0 |
| | Sum | 10 points | = 1.00 |
Table 11. Weights for pairwise method using offsets.

| Abbreviation | Criteria | Formula (2/10 Offset) | Weight (2/10 Offset) |
|---|---|---|---|
| (P) | Purchase Price | 6 points / 14 points | = 0.30 / 0.233 |
| (R) | Reliability | 5 points / 13 points | = 0.25 / 0.217 |
| (S) | Safety | 4 points / 12 points | = 0.20 / 0.20 |
| (A) | Attractiveness | 3 points / 11 points | = 0.15 / 0.183 |
| (G) | Gas Mileage | 2 points / 10 points | = 0.10 / 0.167 |
| | Sum | 20 points / 60 points | = 1.00 |
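The pairwise weights in Tables 10 and 11 can be reproduced by counting pairwise wins and normalizing, optionally adding an offset to every score so that the lowest-ranked criterion keeps a non-zero weight. The helper below is a minimal illustrative sketch of ours, not code from the original text:

```python
from itertools import combinations

# Minimal sketch of simple pairwise comparison weighting.
# prefer(a, b) returns the preferred criterion; here preferences follow Table 8.
order = ["P", "R", "S", "A", "G"]  # most to least important

def pairwise_weights(criteria, prefer, offset=0):
    scores = {c: offset for c in criteria}
    for a, b in combinations(criteria, 2):
        scores[prefer(a, b)] += 1  # one point per pairwise win
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

prefer = lambda a, b: a if order.index(a) < order.index(b) else b
print(pairwise_weights(order, prefer))            # Table 10: 0.4, 0.3, 0.2, 0.1, 0.0
print(pairwise_weights(order, prefer, offset=2))  # Table 11: 0.30, 0.25, 0.20, 0.15, 0.10
```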
Table 12. Ordinal criteria ranking.

| Abbreviation | Criteria | Ordinal Ranking with Index |
|---|---|---|
| (P) | Purchase Price | i = 1 |
| (R) | Reliability | i = 2 |
| (S) | Safety | i = 3 |
| (A) | Attractiveness | i = 4 |
| (G) | Gas Mileage | i = 5 |
Table 13. Weight using ROC technique.

| Abbreviation | Criteria | Formula | Weight |
|---|---|---|---|
| (P) | Purchase Price | w1 = 1/5 (1 + 1/2 + 1/3 + 1/4 + 1/5) | = 0.457 |
| (R) | Reliability | w2 = 1/5 (1/2 + 1/3 + 1/4 + 1/5) | = 0.257 |
| (S) | Safety | w3 = 1/5 (1/3 + 1/4 + 1/5) | = 0.157 |
| (A) | Attractiveness | w4 = 1/5 (1/4 + 1/5) | = 0.090 |
| (G) | Gas Mileage | w5 = 1/5 (1/5) | = 0.040 |
| | Sum | w1 + w2 + w3 + w4 + w5 | ≈ 1.00 |
Table 14. Weight using RS technique.

| Abbreviation | Criteria | Formula | Weight |
|---|---|---|---|
| (P) | Purchase Price | w1 = 2(5 + 1 − 1)/(5(5 + 1)) | = 0.333 |
| (R) | Reliability | w2 = 2(5 + 1 − 2)/(5(5 + 1)) | = 0.267 |
| (S) | Safety | w3 = 2(5 + 1 − 3)/(5(5 + 1)) | = 0.200 |
| (A) | Attractiveness | w4 = 2(5 + 1 − 4)/(5(5 + 1)) | = 0.133 |
| (G) | Gas Mileage | w5 = 2(5 + 1 − 5)/(5(5 + 1)) | = 0.067 |
| | Sum | w1 + w2 + w3 + w4 + w5 | = 1.00 |
Table 15. Weight using RR technique, where wi = (1/i)/(Σ from k = 1 to N of 1/k).

| Abbreviation | Criteria | Formula | Weight |
|---|---|---|---|
| (P) | Purchase Price | w1 = (1/1)/(1/1 + 1/2 + 1/3 + 1/4 + 1/5) | = 0.438 |
| (R) | Reliability | w2 = (1/2)/(1/1 + 1/2 + 1/3 + 1/4 + 1/5) | = 0.218 |
| (S) | Safety | w3 = (1/3)/(1/1 + 1/2 + 1/3 + 1/4 + 1/5) | = 0.146 |
| (A) | Attractiveness | w4 = (1/4)/(1/1 + 1/2 + 1/3 + 1/4 + 1/5) | = 0.109 |
| (G) | Gas Mileage | w5 = (1/5)/(1/1 + 1/2 + 1/3 + 1/4 + 1/5) | = 0.088 |
| | Sum | w1 + w2 + w3 + w4 + w5 | ≈ 1.00 |
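The three rank-based approximations in Tables 13–15 follow closed-form formulas in the rank position i and the number of criteria N. The sketch below (our own helper functions) reproduces the tabulated weights for N = 5:

```python
# Minimal sketch of the rank-based approximate techniques (Tables 13-15).
# Ranks run from 1 (most important) to N (least important).
def roc_weights(n):
    # Rank Ordered Centroid: w_i = (1/N) * sum_{k=i}^{N} 1/k
    return [sum(1 / k for k in range(i, n + 1)) / n for i in range(1, n + 1)]

def rank_sum_weights(n):
    # Rank Sum: w_i = 2 * (N + 1 - i) / (N * (N + 1))
    return [2 * (n + 1 - i) / (n * (n + 1)) for i in range(1, n + 1)]

def rank_reciprocal_weights(n):
    # Rank Reciprocal: w_i = (1/i) / sum_{k=1}^{N} 1/k
    denom = sum(1 / k for k in range(1, n + 1))
    return [(1 / i) / denom for i in range(1, n + 1)]

for name, w in [("ROC", roc_weights(5)),
                ("RS", rank_sum_weights(5)),
                ("RR", rank_reciprocal_weights(5))]:
    print(name, [round(x, 3) for x in w])
# ROC [0.457, 0.257, 0.157, 0.09, 0.04]
# RS  [0.333, 0.267, 0.2, 0.133, 0.067]
# RR  [0.438, 0.219, 0.146, 0.109, 0.088]
```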
Table 16. Characteristics of weighting methods for ratio assignment techniques.

| Method | Advantages | Disadvantages | Uses |
|---|---|---|---|
| Direct assignment technique | Straightforward; effort scales linearly with the number of attributes; easily implemented with a spreadsheet or calculator | Must be repeated if attributes change; sensitive to reference point | Situations in which attributes have clear separation in terms of importance |
| Simple multi attribute rating technique (SMART)/SMARTER/SMARTS | Attributes can change without redoing assessment; effort scales linearly with the number of attributes; greater weight diversity than SWING | Attribute value ranges influence weights | Situations in which attributes have clear separation in terms of importance; scenarios where scales for attributes are clear |
| Swing weighting | Attributes can change without redoing assessment; effort scales linearly with the number of attributes | Limited number of weights available | Situations in which attributes have clear separation in terms of importance; scenarios where scales for attributes are clear |
| Simple pairwise comparison | Low effort | Does not prevent weight inconsistency | Situations in which attributes have clear separation in terms of importance; scenarios where scales for attributes are clear |
Table 17. Characteristics of weighting methods for approximate techniques.

| Method | Advantages | Disadvantages | Uses |
|---|---|---|---|
| Equal weighting | Easiest of all methods; easily implemented with a spreadsheet or calculator | Few if any real-world scenarios have all attributes of equal importance; inaccurate relative to other techniques | Early in the decision process; situations with incomplete or no attribute information; scenarios where a large number of attributes are present |
| Rank Ordered Centroid | Uses ordinal ranking only to determine weights; easily implemented with a spreadsheet or calculator | Based on uniform distribution | Analyst is unwilling to assign specific weights; scenarios where consensus may not be necessary or desirable, but ranking can be agreed upon [80]; scenarios where a large number of attributes are present |
| Rank Sum | Uses ordinal ranking only to determine weights; easily implemented with a spreadsheet or calculator | Based on uniform distribution | Analyst is unwilling to assign specific weights; scenarios where consensus may not be necessary or desirable, but ranking can be agreed upon [80]; scenarios where a large number of attributes are present |
| Rank Reciprocal | Uses ordinal ranking only to determine weights; easily implemented with a spreadsheet or calculator | Only useful when more precise weighting is not available | Analyst is unwilling to assign specific weights; scenarios where consensus may not be necessary or desirable, but ranking can be agreed upon [80]; scenarios where a large number of attributes are present |
Table 18. Considerations for incorporating ratio assignment techniques into ABM, DES, and SD modeling paradigms.

| Ratio Assignment Technique | Agent Based Modeling | Discrete Event Simulation | System Dynamics |
|---|---|---|---|
| Direct assignment technique | Known * or accepted ^ criteria that direct an agent towards their goals or one decision outcome or another | Known or accepted decision path probabilities; known or accepted resource schedules | Known or accepted coefficient values within an ordinary differential equation (ODE), partial differential equation (PDE), or difference equation (DE) |
| Simple multi attribute rating technique (SMART)/SMARTER/SMARTS | There exists an accepted least important criterion and the remaining criteria are weighted relative to this option. Each agent population may utilize different weighting preferences. | A least acceptable path is known and the remaining options are weighted relative to this option. Weighting preferences can vary by entity type. | The ODE, PDE, or DE contains a value whose coefficient is known to be least important. Remaining coefficients are weighted relative to this coefficient. |
| Swing weighting | Order of importance is known/accepted but the most important element is not always the top ranked. Current rankings and known important criteria are used to establish weightings of remaining criteria. | Top ranked path or most desirable schedule is known but does not always remain top ranked during execution. Selections are made relative to the known choice based on its current ranking. | Coefficient weightings are intended to weight towards a specified most important criterion; however, new weights are generated based on the magnitude of change from the previous check to incorporate stochasticity. |
| Simple pairwise comparison | No established known or accepted ranking of criteria weightings. The agent compares all available criteria to accumulate weighting scores. | No established known or accepted ranking of criteria weightings. Entities or resources compare all available criteria to accumulate weighting scores for path probabilities or scheduling. | No established known or accepted ranking of criteria (e.g., coefficient) weightings. Equation coefficient weightings accumulate based on comparisons of all criteria. |

* The term known reflects that a weighting is supported by empirical evidence. ^ The term accepted reflects general agreement among the model’s builders and stakeholders.
Table 19. Considerations for incorporating approximate techniques into ABM, DES, and SD modeling paradigms.

| Approximate Technique | Agent Based Modeling | Discrete Event Simulation | System Dynamics |
|---|---|---|---|
| Equal Weighting | Agent decision criteria are assumed to be of equal importance. This technique may be applicable in cases where the use of the uniform distribution for sampling is appropriate. | Path selection or resource selection is assumed to be of equal importance. This technique may be applicable in cases where the use of the uniform distribution for sampling is appropriate. | Values of coefficient weightings are assumed to be of equal importance. |
| Rank Ordered Centroid Technique | Order of importance of decision criteria is based on the aggregate orderings from each agent and updates over time. | Resource schedules depend on aggregate rankings of criteria from the entities or resources, which change as resource availabilities (e.g., through schedules) change or as aggregated weight and processing times change. | Values of coefficient weightings are based on the aggregate performance of stocks or auxiliary variables over time. |
| Rank Sum Technique | Weightings are based on aggregated rankings of importance from each agent based on a utility function. | Weightings are based on aggregated rankings of importance from each entity over time based on a utility function. | Weightings are based on aggregated rankings of importance of stocks or auxiliary variables over time based on a utility function. |
| Rank Reciprocal | Weightings are based on aggregated rankings of importance from each agent based on preference. | Weightings are based on aggregated rankings of preferred importance from each entity per entity type. | Weightings are based on aggregated rankings of importance of stocks or auxiliary variables over time based on preference. |
