Article

A Comparative Analysis of Multi-Criteria Decision-Making Methods for Resource Selection in Mobile Crowd Computing

by Pijush Kanti Dutta Pramanik 1, Sanjib Biswas 2, Saurabh Pal 1, Dragan Marinković 3,* and Prasenjit Choudhury 1

1 Department of Computer Science & Engineering, National Institute of Technology, Durgapur 713209, India
2 Decision Sciences, Operations & Information Systems, Calcutta Business School, Kolkata 743503, India
3 Faculty of Mechanical and Transport Systems, Technische Universität Berlin, 10623 Berlin, Germany
* Author to whom correspondence should be addressed.
Symmetry 2021, 13(9), 1713; https://doi.org/10.3390/sym13091713
Submission received: 27 July 2021 / Revised: 6 September 2021 / Accepted: 10 September 2021 / Published: 16 September 2021
(This article belongs to the Special Issue Uncertain Multi-Criteria Optimization Problems II)

Abstract:
In mobile crowd computing (MCC), smart mobile devices (SMDs) are utilized as computing resources. To achieve satisfactory performance and quality of service, selecting the most suitable resources (SMDs) is crucial. The selection is generally made based on the computing capability of an SMD, which is defined by its various fixed and variable resource parameters. As the selection is made on different criteria of varying significance, the resource selection problem can be duly represented as an MCDM problem. However, for the real-time implementation of MCC, and considering its dynamicity, the resource selection algorithm should be time-efficient. In this paper, we aim to identify a suitable MCDM method for resource selection in such a dynamic and time-constrained environment. For this, we present a comparative analysis of various MCDM methods under asymmetric conditions with varying selection criteria and alternative sets. Various datasets of different sizes are used for evaluation. We execute each program on a Windows-based laptop and also on an Android-based smartphone to assess average runtimes. Besides the time complexity analysis, we perform a sensitivity analysis and a ranking order comparison to check the correctness, stability, and reliability of the rankings generated by each method.

1. Introduction

The trend in the miniaturization of electronics has paved the way for smart mobile devices (SMDs) to be equipped with significant computing capabilities. They are being loaded with multiple processing cores, specialized processors for different purposes, sizeable memory, and high-capacity batteries. This has prompted users to prefer SMDs, which include smartphones and tablets, as their primary computing devices, leaving behind desktops and laptops. However, although SMDs are used frequently, they are not in use by their owners most of the time. The SMDs’ processing units are discretely utilized only for a few hours a day, on average [1,2,3]. The rest of the time, the processing modules remain idle, wasting a significant computing resource. These wasted computing cycles can be utilized by lending them to needier applications that require extra computing resources to carry out computing-intensive tasks [4,5,6,7]. If a collection of such unused computing resources is pooled, it can deliver an economical and sustainable high-performance computing (HPC) environment [8,9,10].

1.1. Mobile Crowd Computing

In mobile crowd computing (MCC), publicly owned SMDs are used as computing resources [11]. The increasing use of SMDs has fueled the possibilities of MCC to a great extent. An estimate by Statista, a leading market and consumer data provider, suggests that the number of global smartphone users will reach 4.3 billion in 2023, up from 3.8 billion in 2021 [12]. Due to this wide-scale SMD user base, there is a high probability of finding a sufficient number of SMDs not only in populous places but also in scantily crowded locations. Therefore, thanks to the infrastructural flexibility and omnipresence of SMDs, an ad hoc HPC system can be formed anywhere, enabling on-demand pervasive and ubiquitous computing [13]. Moreover, in the wake of the IoT and the IoE, the need for local processing is growing [14] because most of these applications are time-constrained and cannot afford to send data to a remote cloud for processing [15]. MCC can offer a local computing facility to these applications in the form of ad hoc mobile cloud computing [16,17,18] and edge computing [19,20,21]. Besides ad hoc use, MCC can also serve as an organizational computing infrastructure by making use of in-house SMDs.

1.2. Resource Selection in Mobile Crowd Computing

The effectiveness (e.g., response time, throughput, turnaround time) and reliability (e.g., fault tolerance, ensured resource availability, device mobility handling, minimized hand-offs) of MCC largely depend on selecting the right resources for job scheduling. That is why it is crucial to select the most suitable resources among those currently available [22]. In this paper, we considered only the computing resources of the SMDs as selection criteria. Computing capability is among the most important selection criteria, as it eventually influences the response time, throughput, and turnaround time for any given task. However, selecting SMDs based on their computing factors, which are conflicting in nature, is non-trivial.
As mentioned earlier, quite a few SMDs might be available at a certain place (local MCC, connected through a WLAN or other short-range communication means) [23,24] or for a certain application (global MCC, connected through the internet) to be considered as computing resources [25,26,27]. Among this sizable pool of resources, which would be most suitable? The selection problem is aggravated by the fact that SMD makers regularly launch different devices with a wide variety of hardware resources. Hence, in most cases, the available SMDs in an MCC are vastly heterogeneous in terms of hardware (e.g., CPU and GPU clock frequency, number of cores, primary memory size, secondary memory size, battery capacity); with different specifications, the SMDs offer varying computing capacities [28].
Along with the hardware specifications of the SMDs, another aspect needs to be considered while selecting an SMD as a computing device: the present status of the SMD’s various resources, such as CPU and GPU load, available memory, available battery, signal strength, etc. Irrespective of their rated capacity, the usability of these resources depends on their actual availability. To elaborate, consider the following scenario:
Two SMDs, M1 and M2, have the CPU frequencies 1.8 GHz and 2.2 GHz, respectively. Their present CPU loads are 30% and 90%, respectively. In this case, though M2 has a more capable CPU, as an immediate computing resource, M1 would be preferable because it has a much lower CPU load, i.e., it is more usable than M2.
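One rough way to quantify this intuition (an illustrative calculation, not a metric used in our selection model) is the currently idle capacity, clock × (1 − load): M1 offers 1.8 GHz × (1 − 0.30) = 1.26 GHz of idle capacity, whereas M2 offers only 2.2 GHz × (1 − 0.90) = 0.22 GHz.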
The values of these variable parameters change depending on how the user uses the SMD. That is why, instead of selecting SMDs based only on hardware specifications, the current status of these parameters needs to be considered. For a better QoS of MCC, it is crucial to select the most suitable SMDs, i.e., those with the best usable resources to offer at the moment of job submission and during its execution.
In general, considering all these diverse specifications, selecting the right SMD or set of SMDs, in terms of computing resources, from the many available SMDs in the MCC network can be considered an MCDM problem.

1.3. Resource Selection as an MCDM Problem

Deciding on the best candidates from a set of alternatives based on multiple pre-set criteria is known as an MCDM problem. Suppose there is a finite set of distinct alternatives {A1, A2, …, An}. The alternatives are evaluated using a set of criteria {C1, C2, …, Cm}. A performance score $p_{ij}$ is calculated for each alternative $A_i$ (i = 1, 2, …, n) with respect to each criterion $C_j$ (j = 1, 2, …, m). Based on the calculated performance scores, an MCDM method orders the alternatives from best to worst. Here, the alternatives are homogeneous in nature, but the criteria may not be. They can be expressed in different units that have no apparent interrelationship. The criteria may conflict with each other; i.e., some may have maximizing objectives while others have minimizing objectives. The criteria may also carry weights signifying their importance in the decision-making [29]. The common stages of a typical MCDM method are shown in Figure 1.
In our SMD selection problem, the alternatives are the SMDs available in the MCC at the time of job submission, and the criteria are different parameters considered for SMD selection (e.g., CPU frequency, RAM, CPU load, etc.). The MCDM solutions provide a ranking of the available SMDs based on the selection criteria. From this ranked list, the resource management module of the MCC selects the top-ranked SMD(s) for job scheduling.
Over the years, several algorithms have been developed which contributed significantly to the evolution of the expanding field of MCDM. These methods differ in terms of their computational logic and assumption, applicability, calculation complexities, and ability to withstand variations in the given conditions. Table 1 lists some of the popular MCDM approaches and the most noteworthy representatives of each approach.

1.4. Paper Objective

Though the resource selection problem in MCC is an ideal MCDM problem, we could not find any significant work on this topic. In fact, MCDM has not been sufficiently explored for solving resource selection problems in analogous distributed computing systems. As discussed in Section 2, very few works have attempted to use MCDM methods for resource selection in allied domains such as grid computing, cloud computing, and mobile cloud computing.
However, witnessing the wide-scale application of MCDM, especially in decision-making problems, we believe it can also offer promising solutions for resource selection in MCC and similar computing systems, which has not been explored so far. For real-time resource selection in a dynamic environment like MCC, it is necessary to adopt an MCDM approach that provides consistent and reasonably accurate SMD selection decisions, balancing various parameters at a reasonable time complexity. In view of that, the key objective of this paper is to find out which of several existing MCDM methods would be the most suitable for this particular problem scenario.
In this paper, we aim to assess and compare the performance of different MCDM methods in selecting SMDs as computing resources in MCC. The comparative assessment is made in terms of the correctness and robustness of the SMD rankings given by each method and the precise run-time of each method.

1.5. Paper Contribution

This paper presents a comparative study of five MCDM methods under asymmetric conditions with varying criteria and alternative sets for resource selection in MCC. The following are the main contributions of the paper:
  • We use five distinct MCDM algorithms for the comparative analysis—EDAS, ARAS, MABAC, COPRAS, and MARCOS.
  • The five algorithms used in this study are of a distinctive nature in terms of their fundamental procedures. Moreover, the combination of the considered MCDM methods comprises some popularly used methods and some recently proposed ones. Such a diverse combination for a comparative study of MCDM methods is quite rare in the literature.
  • To check the impact of the number of alternatives and criteria on the performance of the MCDM methods, we consider four data sets of different sizes. Each of the methods is implemented on all four datasets.
  • We carry out an extensive comparative analysis of the results for all the considered scenarios under different variations of criteria and alternative sets. The comparative analysis covers two aspects: (a) an exhaustive validation and robustness check and (b) the time complexity of each method.
  • Along with the time complexity of each MCDM method, the actual runtimes of each method on two different types of devices (a laptop and a smartphone) are compared and analyzed for each considered scenario.
  • We found hardly any work in which a computational and runtime-based comparison of different MCDM methods has been carried out in addition to validation and robustness checks. To be specific, this paper is the first of its kind to compare MCDM methods of different categories for resource selection in MCC or any other distributed mobile computing system.

1.6. Paper Organization

The rest of this paper is organized as follows. In Section 2, we collate some related work and discuss its findings. Section 3 discusses the objective weighting method (entropy) and the MCDM methods used in the study, along with their respective algorithms. In Section 4, we present the research methodology, which includes the details of data collection, the choice of resource selection criteria, and the different experimental cases (datasets) considered for the study. Section 5 presents the experimental details and the results of the comparative analysis. Section 6 presents a critical analysis of the experimental findings and the rationality and practicability of this study. Finally, Section 7 concludes the paper, pointing out the limitations of this study and mentioning the future scope and research prospects for improving this work. Table 2 lists the acronyms used in this paper and their full forms.

2. Related Work

MCDM techniques have been used for decision-making in several application domains for a long time [44,45]. They have been extensively used in engineering [46]. Table 3 lists some major application areas of MCDM along with respective references. However, this list is by no means comprehensive, but only representative. To keep the list short, we mainly considered review or survey articles. In the following, we discuss some scholarly works in the context of our study.
Like web service selection [47,48], MCDM methods are also popularly used for cloud service selection [49,50,51]. Youssef [52] used a combination of TOPSIS and BWM to rank cloud service providers based on nine service evaluation criteria, including sustainability, response time, usability, interoperability, cost, maintainability, reliability, scalability, and security. Singla et al. [53] used Fuzzy AHP and Fuzzy TOPSIS to select optimal cloud services in a dynamic mobile cloud computing environment. They considered resource availability, privacy, capacity, speed, and cost as selection criteria.
MCDM methods are being used to improve the efficiency and effectiveness of job offloading in mobile cloud computing [54,55]. To save the energy of a mobile device, Ravi and Peddoju [56] used TOPSIS for selecting suitable service providers such as cloud, cloudlet, and peer mobile devices to offload the computation tasks. They considered the waiting time, the energy required for communication, the energy required for processing in mobile devices, and connection time with the resource as the selection criteria.
Mishra et al. [57] proposed an adaptive MCDM model for resource selection in fog computing, which can accommodate the new-entrant fog nodes without reranking all the alternatives. The proposed method is claimed to have less response time and is suitable for a dynamic and distributed environment.
To ensure the quality of the collected data in mobile crowd sensing applications, Gad-ElRab and Alsharkawy [58] used the SAW method for selecting the most efficient devices based on computation capabilities, available energy, sensors attached to the device, etc.
Nik et al. [59] used the TOPSIS method to select the resource with the best response time for asynchronous replicated systems in a utility-based computing environment. To achieve a shorter response time, they considered four QoS parameters (efficiency, freshness of data, reliability, and cost) as selection criteria.
MCDM methods have been used for resource selection in grid computing as well. Mohammadi et al. [60] used AHP and TOPSIS in combination for grid resource ranking, considering cost, security, location, processing speed, and round-trip time as criteria. Abdullah et al. [61] used the TOPSIS method to select resources for fair load balancing in a multi-level computing grid, considering three criteria: expected completion time, resource reliability, and resource load. Kaur and Kadam [62] used MCDM methods for a two-phased resource selection in grid computing. They applied the SAW method to rank the best resources at the local or lower level and then used enriched PROMETHEE-II combined with AHP for global resource selection, i.e., to select the best resources from among the top-ranked resources at each local level.
Several works have been proposed for the evaluation and selection of smartphones [63,64,65,66,67,68,69], but in all of them, smartphones were considered consumer devices, and various aspects were weighed for selection by matching them with consumers’ choices and interests. We could not find any work that applied MCDM to smartphone selection as a computing resource.
Triantaphyllou, in his book [70], extensively compared popular MCDM methods such as WSM, WPM, TOPSIS, ELECTRE, and AHP (along with its variants). The methods were discussed based on real-life issues, both theoretically and empirically. A sensitivity analysis was performed on the considered methods, and the abnormalities of some of these methods were rigorously analyzed. Velasquez and Hester [71] performed a literature review of several MCDM methods, viz., MAUT, AHP, fuzzy set theory, case-based reasoning, DEA, SMART, goal programming, ELECTRE, PROMETHEE, SAW, and TOPSIS. Their study aimed to analyze the advantages and disadvantages of the considered methods and examine their suitability in specific application scenarios.
Several other works have attempted comparative studies of different MCDM methods with respect to different application areas. Table 4 presents a comprehensive list of such works. However, despite our best efforts, we could not find any comparative analysis of MCDM methods for resource selection in a dynamic environment like MCC or in any related application. From the table, it can also be observed that, barring only a few, none of these works conducted a time complexity analysis. Furthermore, we did not find a single paper that measured the actual runtime of the MCDM algorithms. These unique contributions make our paper exclusive.

3. Research Background

This section briefly discusses the key methods considered for the comparative study and their corresponding computational algorithms.

3.1. MCDM Methods Considered for the Comparative Study

This section briefly describes the five MCDM methods considered for the comparative analysis, along with their computation algorithms. In this paper, we derived the preferential order of the alternatives based on the following aspects:
(a) Separation from the average solution (EDAS method).
(b) The relative positioning of the alternatives with respect to the best one (ARAS method).
(c) Utility-based classification and preferential ordering on a proportional scale (COPRAS method).
(d) Approximation of the positions of the alternatives to the average solution area (MABAC method).
(e) Compromise solution while trading off the effects of the criteria on the alternatives (MARCOS method).
We considered widely used MCDM methods as representatives of each of the above-mentioned classes. In Table 5, we present a comparative analysis of the merits and demerits of the considered MCDM methods. Since calculation time is vital in our problem (resource selection in MCC) and subjective bias might affect the final solution, we avoided pairwise comparison methods such as AHP, ANP, ELECTRE, MACBETH, REMBRANDT (multiplicative AHP), PAPRIKA, etc.

3.1.1. EDAS Method

EDAS is a recently developed distance-based algorithm that considers the average solution as a reference point [32]. An alternative with a higher favorable deviation, i.e., positive distance from average (PDA), is preferred over one with a higher non-favorable deviation, i.e., negative distance from average (NDA). As a result, EDAS provides a reasonably robust solution, free from outlier effects, the rank reversal problem, and decision-making fluctuations [165]. However, because EDAS rewards deviation from the average rather than proximity to an ideal, it does not single out an ideally favorable result; the method is therefore better suited to risk-aversion considerations. The procedural steps of EDAS are described below.
Step 1: Calculation of the average solution
The average solution is the midpoint for all alternatives in the solution space with respect to a particular criterion and is calculated by:
$$AV_j = \frac{\sum_{i=1}^{m} x_{ij}}{m}, \quad j = 1, 2, \ldots, n$$
Step 2: Calculation of PDA and NDA
PDA and NDA are the dispersion measures for each possible solution with respect to the average point. An alternative with higher PDA and lower NDA is treated as better than the average one. The PDA and NDA matrices are defined as:
$$PDA = [PDA_{ij}]_{m \times n}$$
$$NDA = [NDA_{ij}]_{m \times n}$$
where:
$$PDA_{ij} = \begin{cases} \dfrac{\max(0,\; x_{ij} - AV_j)}{AV_j}, & \text{if the } j\text{th criterion is profit type} \\[4pt] \dfrac{\max(0,\; AV_j - x_{ij})}{AV_j}, & \text{if the } j\text{th criterion is cost type} \end{cases}$$
and:
$$NDA_{ij} = \begin{cases} \dfrac{\max(0,\; AV_j - x_{ij})}{AV_j}, & \text{if the } j\text{th criterion is profit type} \\[4pt] \dfrac{\max(0,\; x_{ij} - AV_j)}{AV_j}, & \text{if the } j\text{th criterion is cost type} \end{cases}$$
It can be inferred that if PDA > 0, then the corresponding NDA = 0, and if NDA > 0, then the PDA = 0 for an alternative with respect to a particular criterion.
Step 3: Determine the weighted sum of PDA and NDA for all alternatives
$$SP_i = \sum_{j=1}^{n} w_j\, PDA_{ij}$$
$$SN_i = \sum_{j=1}^{n} w_j\, NDA_{ij}$$
where $w_j$ is the weight of the jth criterion.
Step 4: Normalization of the values of SP and SN for all the alternatives
The linear normalization of the SP and SN values is obtained using the following expressions:
$$NSP_i = \frac{SP_i}{\max_i (SP_i)}$$
$$NSN_i = 1 - \frac{SN_i}{\max_i (SN_i)}$$
Step 5: Calculation of the appraisal score (AS) for all alternatives
Here the appraisal score denotes the performance score of the alternatives.
$$AS_i = \frac{1}{2}(NSP_i + NSN_i)$$
where $0 \le AS_i \le 1$. The alternative with the highest $AS_i$ is ranked first, and so on.
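To make the steps concrete, they can be condensed into a short NumPy routine. The following is our own illustrative sketch (the paper's implementations used MS Excel and Java); the function name and the toy data are hypothetical.

```python
import numpy as np

def edas_rank(X, w, is_profit):
    """Appraisal scores and ranking per the EDAS steps above.
    X: (m, n) decision matrix; w: (n,) weights summing to 1;
    is_profit: (n,) boolean NumPy mask, True for profit-type criteria."""
    X = np.asarray(X, dtype=float)
    av = X.mean(axis=0)                      # Step 1: average solution per criterion
    diff = np.where(is_profit, X - av, av - X)
    pda = np.maximum(0.0, diff) / av         # Step 2: positive distance from average
    nda = np.maximum(0.0, -diff) / av        #         negative distance from average
    sp = (pda * w).sum(axis=1)               # Step 3: weighted sums
    sn = (nda * w).sum(axis=1)
    nsp = sp / sp.max()                      # Step 4: normalization
    nsn = 1.0 - sn / sn.max()
    score = 0.5 * (nsp + nsn)                # Step 5: appraisal scores
    return score, np.argsort(-score)         # indices in rank order (best first)

# toy example: 3 SMDs, criteria = CPU GHz (profit) and CPU load % (cost)
scores, order = edas_rank([[1.8, 30], [2.2, 90], [2.0, 50]],
                          np.array([0.6, 0.4]), np.array([True, False]))
```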

3.1.2. ARAS Method

The ARAS method uses the concept of utility values for comparing the alternatives. In this method, a relative scale (i.e., a ratio) is used to compare the alternatives with respect to the optimal solution [35,166,167]. The method uses a simple additive approach while working effectively under compromising situations and with lower computational complexity [168,169]. However, it has been observed that ARAS works reasonably well only when the number of alternatives is limited [170]. The procedural steps of ARAS are described below.
Step 1: Formation of the decision matrix
$$X = [x_{ij}]_{m \times n}$$
Step 2: Determination of the optimal value
The optimal value for the jth criterion is given by:
$$x_{0j} = \begin{cases} \max_i x_{ij}, & \text{for profit type criteria} \\ \min_i x_{ij}, & \text{for cost type criteria} \end{cases}$$
Step 3: Formation of the normalized decision matrix
The criteria have different dimensions. Normalization is carried out to obtain dimensionless weighted performance values for all alternatives under the influence of the criteria. In this case, we follow a linear ratio approach for normalization, considering the optimum point as the base level. Therefore, the normalized decision matrix includes the optimum value, and the order of the matrix is $(m+1) \times n$. In the ARAS method, a two-stage normalization is followed for cost-type criteria. The normalized decision matrix is given by:
$$R = [r_{ij}]_{(m+1) \times n}$$
where:
$$r_{ij} = \begin{cases} \dfrac{x_{ij}}{\sum_{i=0}^{m} x_{ij}}, & \text{for profit type criteria} \\[4pt] \dfrac{1/x_{ij}}{\sum_{i=0}^{m} 1/x_{ij}}, & \text{for cost type criteria} \end{cases}$$
If, for a cost-type criterion, $x_{ij} = 0$, we set $r_{ij} = 0$.
Step 4: Derive the weighted normalized decision matrix
$$V = [v_{ij}]_{(m+1) \times n}$$
where:
$$v_{ij} = r_{ij} \times w_j, \quad i = \overline{0, m}$$
Step 5: Calculation of the optimality function value for each alternative
$$S_i = \sum_{j=1}^{n} v_{ij}, \quad i = \overline{0, m}$$
The higher the value of $S_i$, the better the alternative.
Step 6: Find out the priority order of the alternatives based on utility degree with respect to the ideal solution
$$K_i = \frac{S_i}{S_0}, \quad i = \overline{0, m}$$
where $S_0$ is the optimality function value of the optimal alternative and $K_i \in [0, 1]$.
Obviously, a bigger value of $K_i$ is preferable. The optimality function value $S_i$ maintains a direct and proportional relationship with the performance values of the alternatives and the weights of the criteria. Hence, the greater the value of $S_i$, the more effective the corresponding solution. The degree of utility $K_i$ is essentially the usefulness of the corresponding alternative with respect to the optimal one.
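A compact NumPy sketch of the ARAS steps above follows; again, this is an illustrative re-implementation (the function name and inputs are ours), and it assumes no cost-type value is zero (the method sets $r_{ij} = 0$ in that case).

```python
import numpy as np

def aras_rank(X, w, is_profit):
    """Utility degrees K_i per the ARAS steps above (row 0 = optimal alternative)."""
    X = np.asarray(X, dtype=float)
    x0 = np.where(is_profit, X.max(axis=0), X.min(axis=0))  # Step 2: optimal values
    E = np.vstack([x0, X])                   # extended (m+1) x n matrix
    E = np.where(is_profit, E, 1.0 / E)      # Step 3: first stage for cost criteria
    R = E / E.sum(axis=0)                    #         second stage: ratio normalization
    S = (R * w).sum(axis=1)                  # Steps 4-5: weighted optimality function
    K = S[1:] / S[0]                         # Step 6: utility degree vs. optimal row
    return K, np.argsort(-K)
```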

3.1.3. MABAC Method

MABAC uses two areas for performance-based classification of the solutions: an upper approximation area (UAA) for favorable or ideal solutions and a lower approximation area (LAA) for non-favorable or anti-ideal solutions. This method involves lower computational complexity than the EDAS and ARAS methods. Further, since it does not involve distance-based separation measures, it generates stable results [33]. MABAC compares the alternatives based on relative strengths and weaknesses [171]. Because of its simplicity and usefulness, MABAC has become widely popular in various applications, for example, social media efficiency measurement [172], health tourism [173], supply chain performance assessment [159], portfolio selection [174], railway management [175], medical tourism site selection [176], and hotel selection [177]. The procedural steps of MABAC are described below.
Step 1: Normalization of the criteria values
Here, a linear max-min scheme is used. The usefulness of normalization was explained in the descriptions of the previous algorithms.
$$r_{ij} = \begin{cases} \dfrac{x_{ij} - x_j^-}{x_j^+ - x_j^-}, & \text{for beneficial criteria} \\[4pt] \dfrac{x_{ij} - x_j^+}{x_j^- - x_j^+}, & \text{for non-beneficial criteria} \end{cases}$$
where $x_j^+$ and $x_j^-$ are the maximum and minimum values of the jth criterion, respectively.
Step 2: Formulate the weighted normalization matrix (Y)
Elements of Y are given by:
$$y_{ij} = w_j (r_{ij} + 1)$$
where $w_j$ is the weight of the jth criterion.
Step 3: Determination of the Border Approximation Area (BAA)
The elements of the BAA (T) are denoted as:
$$T = [t_j]_{1 \times n}$$
where:
$$t_j = \left( \prod_{i=1}^{m} y_{ij} \right)^{1/m}$$
where m is the total number of alternatives and $t_j$ corresponds to the jth criterion.
Step 4: Calculation of the matrix Q related to the separation of the alternatives from BAA
Q = Y − T
A particular alternative $a_i$ belongs to the UAA ($T^+$) if $q_{ij} > 0$, to the LAA ($T^-$) if $q_{ij} < 0$, and to the BAA ($T$) if $q_{ij} = 0$. The alternative $a_i$ is considered the best among the others if as many as possible of its criteria values belong to $T^+$.
Step 5: Ranking of the alternatives
The ranking is done according to the final values of the criterion functions, given by:
$$S_i = \sum_{j=1}^{n} q_{ij}, \quad i = 1, 2, \ldots, m$$
The higher the value of $S_i$, the more preferred the alternative.
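The MABAC steps above reduce to a few array operations; the sketch below is illustrative (our own function name and argument conventions) and assumes no criterion is constant across alternatives, to avoid division by zero in the normalization.

```python
import numpy as np

def mabac_rank(X, w, is_profit):
    """Criterion function values S_i per the MABAC steps above."""
    X = np.asarray(X, dtype=float)
    xmin, xmax = X.min(axis=0), X.max(axis=0)
    R = np.where(is_profit,                  # Step 1: linear max-min normalization
                 (X - xmin) / (xmax - xmin),
                 (X - xmax) / (xmin - xmax))
    Y = w * (R + 1.0)                        # Step 2: weighted matrix
    t = np.prod(Y, axis=0) ** (1.0 / len(X)) # Step 3: BAA (geometric mean per criterion)
    Q = Y - t                                # Step 4: separation from the BAA
    S = Q.sum(axis=1)                        # Step 5: criterion function values
    return S, np.argsort(-S)
```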

3.1.4. COPRAS Method

The COPRAS method calculates the utility values of the alternatives under the direct and proportional dependencies of the influencing criteria in order to carry out the preferential ranking [38,178,179]. The procedural steps for finding the utility values of the alternatives using the COPRAS method are discussed in the following. The alternatives are ordered in descending order of the obtained utility values.
Step 1: Construct the normalized decision matrix using the simple proportional approach
$$\tilde{d}_{ij} = \frac{d_{ij}}{\sum_{i=1}^{m} d_{ij}}$$
where $d_{ij}$ is the performance value of the ith alternative with respect to the jth criterion (i = 1, 2, …, m; j = 1, 2, …, n).
Step 2: Calculation of the sums of the weighted normalized values for optimization in ideal and anti-ideal effects
The ideal and anti-ideal effects are calculated as:
$$G_{+i} = \sum_{j=1}^{k} \tilde{d}_{ij}\, \varepsilon_j$$
$$G_{-i} = \sum_{j=k+1}^{n} \tilde{d}_{ij}\, \varepsilon_j$$
where k is the number of maximizing (i.e., profit type) criteria and $\varepsilon_j$ is the significance of the jth criterion.
In the case of $G_{+i}$, all $\tilde{d}_{ij}$ values correspond to beneficial or profit-type criteria, and for $G_{-i}$, we take the performance values of the alternatives related to cost-type criteria.
Step 3: Calculation of the relative weights of the alternatives
The relative weight for any alternative (ith) is given as:
$$\Omega_i = G_{+i} + \frac{\min_i G_{-i} \cdot \sum_{i=1}^{m} G_{-i}}{G_{-i} \cdot \sum_{i=1}^{m} \frac{\min_i G_{-i}}{G_{-i}}} = G_{+i} + \frac{\sum_{i=1}^{m} G_{-i}}{G_{-i} \cdot \sum_{i=1}^{m} \frac{1}{G_{-i}}}$$
The $\Omega_i$ value corresponding to the ith alternative signifies its degree of satisfaction with respect to the given conditions. The greater the value of $\Omega_i$, the better the relative performance of the concerned alternative, and hence the higher its position. Therefore, the most rational and efficient DMU is the one with the optimum value $\Omega_{\max}$. The relative utility of a particular DMU or alternative is determined by comparing its $\Omega_i$ value with $\Omega_{\max}$, the value corresponding to the most effective one.
The utility for each alternative is given by:
$$U_i = \frac{\Omega_i}{\Omega_{\max}} \times 100\%$$
Needless to say, the U i value for the most preferred choice is 100%.
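A short NumPy sketch of COPRAS follows, using the simplified form of the $\Omega_i$ expression; this is illustrative only, with our own function name, and assumes a boolean NumPy mask, at least one criterion of each type, and non-zero $G_{-i}$ values.

```python
import numpy as np

def copras_rank(X, w, is_profit):
    """Utility degrees U_i (%) per the COPRAS steps above."""
    X = np.asarray(X, dtype=float)
    D = (X / X.sum(axis=0)) * w              # Step 1: proportional normalization, weighted
    g_plus = D[:, is_profit].sum(axis=1)     # Step 2: ideal (profit-criteria) effects
    g_minus = D[:, ~is_profit].sum(axis=1)   #         anti-ideal (cost-criteria) effects
    # Step 3: relative weights, simplified equivalent form
    omega = g_plus + g_minus.sum() / (g_minus * (1.0 / g_minus).sum())
    U = 100.0 * omega / omega.max()          # utility degree; best alternative = 100%
    return U, np.argsort(-U)
```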

3.1.5. MARCOS Method

MARCOS belongs to a strand of MCDM algorithms that derive solutions under compromise situations. However, unlike its predecessors, MARCOS starts by including the ideal and anti-ideal solutions in the fundamental decision matrix at the very beginning. Like COPRAS, it also finds utility values; however, here the decision-maker can make a trade-off between the ideal and anti-ideal solutions to arrive at the utility values of the alternatives. The MARCOS method is also capable of handling a large set of alternatives and criteria [43,180,181]. The procedural steps of MARCOS are described below.
Step 1: Formation of the extended decision matrix (D*) by including the anti-ideal solution ($S^-$) values in the first row and the ideal solution ($S^+$) values in the last row
$S^-$ and $S^+$ are defined by:
$$S^- = \begin{cases} \min_i x_{ij}, & \text{when } j \in \text{profit type} \\ \max_i x_{ij}, & \text{when } j \in \text{cost type} \end{cases}$$
$$S^+ = \begin{cases} \max_i x_{ij}, & \text{when } j \in \text{profit type} \\ \min_i x_{ij}, & \text{when } j \in \text{cost type} \end{cases}$$
The anti-ideal solution represents the worst choice, whereas the ideal solution is the reference point that shows the best possible characteristics given the set of constraints, i.e., criteria.
Step 2: Normalization of D*
The normalized values are given by:
$$r_{ij} = \begin{cases} \dfrac{x_{S^+}}{x_{ij}}, & \text{when } j \in \text{cost type} \\[4pt] \dfrac{x_{ij}}{x_{S^+}}, & \text{when } j \in \text{profit type} \end{cases}$$
where $x_{S^+}$ is the ideal solution value for the jth criterion.
Since it is preferred to move away from the anti-ideal reference point, in MARCOS the normalization is carried out using a linear ratio approach with respect to the ideal solution.
Step 3: Formation of weighted D*
After normalization, the weighted normalized matrix with elements v ij is formulated by multiplying the normalized value of each alternative with the corresponding weight of the criteria, as given below:
$$v_{ij} = w_j\, r_{ij}$$
Step 4: Calculation of the utility degrees of the alternatives with respect to $S^+$ and $S^-$
The utility degree of a particular alternative represents its relative attractiveness under the given conditions. The utility degrees are calculated as follows:
$$K_i^- = \frac{\gamma_i}{\gamma_{S^-}}$$
$$K_i^+ = \frac{\gamma_i}{\gamma_{S^+}}$$
where:
$$\gamma_i = \sum_{j=1}^{n} v_{ij}$$
Step 5: Calculation of the utility function values for $S^+$ and $S^-$
The utility function reflects the trade-off that the considered alternatives make vis-à-vis the ideal and anti-ideal reference points, and is given by:
$$f(K_i^-) = \frac{K_i^+}{K_i^+ + K_i^-}$$
$$f(K_i^+) = \frac{K_i^-}{K_i^+ + K_i^-}$$
The decision to select a particular alternative is based on the utility function values. The utility function exhibits the relative position of the concerned alternative with respect to the reference points. The best alternative is the one closest to the ideal reference point and, at the same time, farthest from the anti-ideal one compared to the other available choices.
Step 6: Calculation of the utility function values for the alternatives
The utility function value for the ith alternative is calculated by:
$$f(K_i) = \frac{K_i^+ + K_i^-}{1 + \frac{1 - f(K_i^+)}{f(K_i^+)} + \frac{1 - f(K_i^-)}{f(K_i^-)}}$$
The alternative with the highest utility function value is ranked first.
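The MARCOS steps above, condensed into a NumPy sketch (illustrative only; the function name and conventions are ours):

```python
import numpy as np

def marcos_rank(X, w, is_profit):
    """Final utility function values f(K_i) per the MARCOS steps above."""
    X = np.asarray(X, dtype=float)
    s_plus = np.where(is_profit, X.max(axis=0), X.min(axis=0))   # ideal solution
    s_minus = np.where(is_profit, X.min(axis=0), X.max(axis=0))  # anti-ideal solution
    E = np.vstack([s_minus, X, s_plus])      # Step 1: extended decision matrix D*
    R = np.where(is_profit, E / s_plus, s_plus / E)  # Step 2: ratio normalization
    gamma = (R * w).sum(axis=1)              # Step 3: weighted sums
    k_minus = gamma[1:-1] / gamma[0]         # Step 4: utility degree vs. anti-ideal row
    k_plus = gamma[1:-1] / gamma[-1]         #         utility degree vs. ideal row
    f_minus = k_plus / (k_plus + k_minus)    # Step 5: utility functions
    f_plus = k_minus / (k_plus + k_minus)
    f = (k_plus + k_minus) / (1.0            # Step 6: final utility function values
         + (1.0 - f_plus) / f_plus
         + (1.0 - f_minus) / f_minus)
    return f, np.argsort(-f)
```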

3.2. Entropy Method for Criteria Weight Calculation

Each selection criterion carries some weight, which defines its importance in the decision-making. To determine the criteria weights, we applied the widely used entropy method. The entropy method works on objective information, following the concepts of probabilistic information theory [182]. The objective weighting approach can mitigate the man-made instabilities of the subjective weighting approach and gives more realistic results [183]. The entropy method shows its efficacy in dealing with imprecise information and dispersion while offsetting subjective bias [184,185]. The extant literature shows a colossal number of applications of the entropy method for determining criteria weights in various situations (for example, [174,186,187,188,189,190]). The steps of the entropy method are given below:
Suppose $X = [x_{ij}]_{m \times n}$ represents the decision matrix, where m is the number of alternatives and n is the number of criteria.
Step 1: Normalization of the decision matrix
Normalization is carried out to bring the performance values of all alternatives, subject to different criteria, to a common unitless form with scale values in (0, 1). Here, we follow the linear normalization scheme.
The entropy value signifies the level of disorder. In criteria weight determination, a criterion with a lower entropy value shows greater dispersion among the alternatives and therefore contains more discriminating information.
The normalized matrix is represented as $R = [r_{ij}]_{m \times n}$, where the elements $r_{ij}$ are given by:
$$r_{ij} = \begin{cases} \dfrac{x_{ij} - x_j^{\min}}{x_j^{\max} - x_j^{\min}}, & \text{for profit type criteria} \\[4pt] \dfrac{x_j^{\max} - x_{ij}}{x_j^{\max} - x_j^{\min}}, & \text{for cost type criteria} \end{cases}$$
Step 2: Calculation of Entropy values
The entropy value for the jth criterion is given by:
$$H_j = -k \sum_{i=1}^{m} f_{ij} \ln(f_{ij})$$
where k is a constant defined by:
$$k = 1 / \ln(m)$$
and:
$$f_{ij} = \frac{r_{ij}}{\sum_{i=1}^{m} r_{ij}}$$
If $f_{ij} = 0$, then we set:
$$f_{ij} \ln(f_{ij}) = 0$$
Step 3: Calculation of criteria weight
The weight for each criterion is given by:
$$w_j = \frac{1 - H_j}{n - \sum_{j=1}^{n} H_j}$$
The higher the value of $w_j$, the more information the jth criterion contains.
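For completeness, a NumPy sketch of the entropy weighting steps above (illustrative; the convention $f_{ij}\ln f_{ij} = 0$ for $f_{ij} = 0$ is handled explicitly):

```python
import numpy as np

def entropy_weights(X, is_profit):
    """Objective criteria weights per the entropy steps above."""
    X = np.asarray(X, dtype=float)
    xmin, xmax = X.min(axis=0), X.max(axis=0)
    R = np.where(is_profit,                  # Step 1: linear max-min normalization
                 (X - xmin) / (xmax - xmin),
                 (xmax - X) / (xmax - xmin))
    F = R / R.sum(axis=0)                    # shares f_ij per criterion
    k = 1.0 / np.log(len(X))
    FlnF = np.where(F > 0, F * np.log(np.where(F > 0, F, 1.0)), 0.0)
    H = -k * FlnF.sum(axis=0)                # Step 2: entropy value per criterion
    return (1.0 - H) / (1.0 - H).sum()       # Step 3: weights, (1-H_j)/(n - sum H_j)
```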

4. Research Methodology

This section discusses the research framework used in this paper and provides the computational steps of the MCDM algorithms applied for carrying out the comparative analysis in a dynamic environment. Figure 2 depicts the steps followed in this research work.

4.1. Resource Selection Criteria

For the experimental purpose, we considered in this paper a generalized scenario for the resource requirements of MCC computing jobs. Generally, an SMD’s computing capability is determined by typical resource parameters such as CPU and GPU power, RAM, battery, signal strength (for data transfers), etc. Here, we considered thirteen criteria for SMD selection, as shown in Table 6. Of these, eight are profit criteria, i.e., their maximized values are ideal for selection, whereas five are cost criteria, i.e., their minimized values are ideal.
However, depending on specific applications and specific job types, the criteria and their weights would vary. For example, a CPU-bound job may not use GPU cores, while some highly computing-intensive jobs (such as image and video analysis, complex scientific calculations, etc.) would use GPU more than the CPU. Similarly, the RAM size would be a decisive factor for a data-intensive job that might not be so important for a CPU-intensive job. Here, we chose the criteria that would, in general and overall, be considered for selecting an SMD as a computing resource.

4.2. Data Collection

To collect the SMD data used in the comparative analysis, we considered a local MCC scenario at the Data Engineering Lab of the Department of Computer Science & Engineering at the National Institute of Technology, Durgapur. We collected data from the users’ SMDs connected to the Wi-Fi access point deployed at this lab, which is generally accessed by the institute’s research scholars, project students, faculty members, and technical staff. We developed a logger program in the Python 3.6 environment. The Python script constantly monitored the wireless network interfaces. Whenever an SMD connected to the access point, the logger program collected the required data and stored it in a database within the MCC coordinator. All devices connected to the access point were identified (UID) by their MAC addresses. The overall MCC setup and data collection scenario are shown in Figure 3.
In another experiment on local MCC [191], we logged the SMD information of several users (whoever connected to the access point during this period) for nearly eight months. Among them, we picked the users who were more consistent, with high presence frequency and low sparsity. For this study, we considered 50 such SMDs, selected randomly. We collected various information related to the users and their SMDs; however, in this paper, we used only the information required for this experiment. To be specific, we considered a total of thirteen resource parameters that are important in the decision-making process for selecting an SMD as a suitable resource in MCC, as shown in Table 6. It can be seen from the table that some resource parameters are fixed, i.e., their values never change over the device’s lifetime (e.g., C1, C2, C3, C4, C6, and C13), while the values of other parameters change dynamically (e.g., C5, C7, C8, C9, C10, C11, and C12). For the experimental purpose, we took instantaneous values of all the parameters and used the same values in all experimental illustrations.

4.3. Experiment Cases

As in this study we wanted to assess the effect of the number of criteria and alternatives on the selection outcome and computational complexity, we considered different variations of the selection criteria and alternatives for comparison. Accordingly, we generated four case scenarios, discussed in the following subsections. Each case has a different number of alternatives (SMDs) and criteria. The reason for choosing four datasets of different sizes is to assess the performance of the MCDM methods under different MCC scenarios.

4.3.1. Case 1: Full List of Alternatives and Full Criteria Set

This scenario considers the full list of alternatives under comparison (i.e., 50) subject to the influence of the full criteria set, consisting of 13 different criteria, as shown in Table 6. Accordingly, the decision matrix (50 × 13) is given in Table 7.

4.3.2. Case 2: Lesser Number of Alternatives and Full Criteria Set

In this minimized dataset, we assume that only ten SMDs are available for crowd computing (typical of a small-scale MCC). In this case, we shortened the list of alternatives. Here, the decision-maker can compare the MCDM methods on a limited number of alternatives against the full list of criteria. For simplicity, we selected one smartphone model out of each group of five, starting from the beginning, i.e., M5, M10, M15, and so on. The decision matrix (10 × 13) is given in Table 8.

4.3.3. Case 3: Total Number of Alternatives and a Smaller Number of Criteria

In some situations, depending on the MCC application requirements, the full criteria set may not need to be considered; only a small number of crucial criteria may be defined. To represent such a scenario, in this case we considered a minimized dataset by eliminating some criteria from the original dataset. We assumed that some criteria (e.g., CPU and battery temperature and signal strength) could be kept out of the selection matrix and, if required, could be applied straightforwardly as threshold criteria. For example, suppose the threshold for temperature is set at 40 °C; then all SMDs with a temperature above this would be filtered out and not considered for selection, irrespective of their other resource specifications. We also removed the GPU information, assuming that the tasks are CPU-bound only and do not need to exploit the power of the GPU, i.e., the jobs are sequential rather than parallel. The opposite could also be considered, i.e., we could include the GPU where the MCC job involves mostly parallel processing. Table 9 shows the criteria considered, and Table 10 presents the decision matrix (50 × 6).

4.3.4. Case 4: Minimized Number of Alternatives and Criteria

In this case, we considered a combination of minimized sets of alternatives and criteria. This scenario involves a limited number of choices under the influence of a limited number of criteria. We considered the alternatives selected in Case 2 and the criteria listed in Table 9. Hence, in this case, our decision matrix is of dimension 10 × 6, as shown in Table 11.

5. Experiment, Results, and Comparative Analysis

In this section, we present the details of the experiment for the comparative study, including the results and a critical discussion. The experiment focuses on the comparative ranking for SMD selection using five distinct MCDM methods and on finding their time complexities under different scenarios by varying the criteria and/or alternative sets.

5.1. Experiment

We applied the entropy method and the five MCDM methods (i.e., EDAS, ARAS, MABAC, COPRAS, and MARCOS) to the four datasets discussed in Section 4.3. The algorithms were implemented in a spreadsheet (MS Excel) as well as through hand-coded programs (in Java). For the ranking and sensitivity analysis, we used the spreadsheet calculations, and to estimate the runtimes, we used the programmed implementations. The details of the programmatic implementation are discussed in Section 5.4. The aggregate rankings of the SMDs were derived from each MCDM method for each dataset. We checked the consistency between the results of the individual MCDM methods and the final aggregate ranks. We also compared the robustness and stability of the performance of the MCDM methods applied in this paper. Finally, the actual runtimes of each method under different scenarios were calculated.

5.2. Results

In this section, we report the details of the experimental results of SMD rankings using the considered MCDM methods, obtained through the spreadsheet calculation.
Table 12 shows the criteria weights calculated for Case 1 using the entropy method, where $\sum_j w_j = 1$ and Cj represents the jth criterion (j = 1, 2, 3, …, 13). The weights of the criteria are reasonably distributed. However, based on the values of the decision matrix, the entropy method calculates higher weights (>10%) for C1, C2, and C4 while assigning the least weights to C11 and C12.
We used these criteria weights to rank the alternatives based on the decision matrix of Table 7, applying the five MCDM methods considered in this paper. Table 13, Table 14, Table 15, Table 16 and Table 17 present the rankings of the alternatives based on the final score values derived using the five MCDM algorithms. From Table 13, we observe that, with the average solution point as the reference, M19, M14, M36, M41, and M7 are the top performers, while the proportional assessment methods ARAS and COPRAS yield M36, M14, M26, M19, M31 and M19, M14, M41, M36, M6, respectively, as the better performers (see Table 14 and Table 16). The top-performing DMUs thus show reasonable consistency. However, Table 15 and Table 17 show that the relative rankings derived by MABAC and MARCOS are only weakly consistent with the previous rankings.
To find the aggregate ranking, we took the final score values of the alternatives obtained using the different algorithms and applied the SAW method [192] for objective evaluation, as adopted in [159]. Table 18 exhibits the relative positioning of the alternatives by the different MCDM methods and their aggregate ranks derived using SAW. In this context, Table 19 shows the findings of the rank correlation tests among the results obtained using the different methods and the final rank obtained by SAW. For this, we used the following two correlation coefficients:
Kendall’s τ: Let {(a1, b1), (a2, b2), …, (an, bn)} be a set of observations of two random variables A and B such that all ai and bi (i = 1, 2, …, n) values are unique. Any pair of observations $(a_i, b_i)$ and $(a_j, b_j)$, where i < j, is said to be concordant if either both $a_i > a_j$ and $b_i > b_j$ or both $a_i < a_j$ and $b_i < b_j$ hold. Kendall’s τ is calculated as follows:
$$\tau = \frac{(\text{number of concordant pairs}) - (\text{number of discordant pairs})}{\binom{n}{2}}$$
Spearman’s ρ: This coefficient is computed from the rank differences and is calculated as follows:
$$\rho = 1 - \frac{6 \sum d_i^2}{n(n^2 - 1)}$$
where $d_i$ is the difference between the two ranks of each observation and n is the number of observations.
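In practice, both coefficients can be computed directly, e.g., with SciPy; the ranks below are made-up numbers for illustration only:

```python
from scipy.stats import kendalltau, spearmanr

# hypothetical ranks of five alternatives from one MCDM method and from SAW
rank_method = [1, 2, 3, 4, 5]
rank_saw    = [1, 3, 2, 4, 5]

tau, _ = kendalltau(rank_method, rank_saw)   # Kendall's tau
rho, _ = spearmanr(rank_method, rank_saw)    # Spearman's rho
print(f"tau = {tau:.3f}, rho = {rho:.3f}")
```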
The aggregated final ranking in terms of consistency is: MABAC > COPRAS > EDAS > ARAS > MARCOS. Similarly, we derived the rankings of the alternatives subject to the influence of the criteria for the other cases (Cases 2 to 4). Table 20, Table 21 and Table 22 show the criteria weights for Cases 2–4 as derived from the performance values of the alternatives under the influence of the criteria involved. In Case 2, we used the full set of criteria but a reduced number of alternatives, while in Case 3, we used the full set of alternatives subject to a reduced set of criteria. In Case 4, we considered reduced sets of both alternatives and criteria. It may be noted from Table 20 that C1, C2, and C13 obtain higher weights (more than 10%), while C4 and C8 hold the least weight. This suggests that when we reduce the number of alternatives, the derived criteria weights change (see Table 12 and Table 20). The same phenomenon is observed when we compare the derived criteria weights for the reduced set of criteria (Cases 3 and 4; see Table 21 and Table 22).
Table 23, Table 24 and Table 25 show the comparative rankings of the alternatives under Cases 2–4, respectively. After obtaining the rankings of the alternatives from the various algorithms, we derived the aggregate rank using the SAW method based on the appraisal scores.
Now, for a comparative analysis of the various MCDM methods, it is important to examine the consistency of their results with the final preferential order. Hence, we performed a non-parametric rank correlation test. Table 19 (Case 1) and Table 26, Table 27 and Table 28 (Cases 2–4) exhibit the results of the correlation tests. From Table 26, we find that COPRAS > EDAS > ARAS > MABAC (MARCOS shows inconsistency with the final ranking). Table 27 indicates that EDAS > ARAS > COPRAS > MABAC > MARCOS, while from Table 28 we find that COPRAS > ARAS > EDAS > MABAC > MARCOS in terms of consistency of their individual results with the final ranking order obtained using SAW.

5.3. Sensitivity Analysis

Some of the essential requirements of MCDM-based analysis are the rationality, stability, and reliability of the rankings [193]. Several variations in the given conditions, for instance, changes in the criteria weights, MCDM algorithms and normalization methods, and the deletion/inclusion of alternatives, often lead to instability in the results [171,194,195]. Sensitivity analysis is conducted to experimentally check the robustness of the results obtained using MCDM-based analysis [196,197]. A particular MCDM method shows stability in its results if it can withstand variations in the given conditions, such as fluctuations in the criteria weights.
For the sensitivity analysis, we used the scheme followed in [198], which simulates different experimental scenarios by interchanging criteria weights. Table 29, Table 30, Table 31 and Table 32 present the experiments vis-à-vis the four cases used in this study. Here, the numbers in italics denote the columns whose weights are interchanged in each experiment [199,200,201]. In this scheme, we interchange the weights of optimum and sub-optimum criteria and of beneficial and cost-type criteria to simulate various possible scenarios for examining the stability of the ranking results obtained by the various MCDM methods.
Figure 4 depicts the comparative variations in the rankings of the alternatives derived using the five MCDM algorithms under the different experimental setups for Case 1. We observe that all five considered MCDM methods provide reasonable stability in the solution, while COPRAS and ARAS perform comparatively better. Table 33 highlights the correlation of the actual ranking with those obtained by changing the criteria weights (see Table 29). In the same way, we carried out the sensitivity analysis of all MCDM methods for Cases 2 to 4. Table 34, Table 35 and Table 36 show the results of the correlation tests, as for Case 1.

5.4. Time Complexity Analysis

This section reports the time complexity analysis and the runtimes of the five MCDM methods considered in this study, as summarized in Table 37. All the methods have a worst-case time complexity of O(mn), where m is the number of alternatives and n is the number of criteria. However, EDAS, MABAC, and COPRAS exhibit a best-case time complexity of Ω(m + n) if the decision matrix is already prepared. If the matrix is constructed at runtime, the best-case time complexity of these methods is also Ω(mn).
Depending on the MCC application and architecture, the MCC coordinator on which the SMD selection program runs might be a computer or an SMD. That is why, to check the performance of the MCDM methods, we measured the runtime of each of them on both a laptop and a smartphone.
To run the MCDM algorithms on the laptop, we used Java (version 16) as the programming language and MS Excel (version 2019) as the database. The programs were executed on a laptop with an AMD Ryzen 3 dual-core CPU (2.6 GHz, 64-bit) and 4 GB of RAM, running Windows 10 (64-bit). To run the programs on a smartphone, we designed an app that could accommodate and run Java program scripts; in this case, we used a text file to store the decision matrix. The programs were executed on a smartphone with a 1.95 GHz Snapdragon 439 SoC (12 nm) featuring an octa-core CPU (4 × 1.95 GHz Cortex-A53 and 4 × 1.45 GHz Cortex-A53) and an Adreno 505 GPU, with 3 GB of RAM, running Android 11.
The MCDM module may get the decision matrix either from secondary storage or from primary memory. We generally store the database on secondary storage when we need to maintain a log for future analysis and prediction. However, updating the SMD resource values in the decision matrix on secondary storage and retrieving them frequently for decision-making involves considerable overhead. Alternatively, the decision matrix can be updated dynamically, with the SMD resource values coming directly into the coordinator’s memory. Compared to secondary storage, accessing memory takes negligible time.
Since the SMDs in MCC are mobile, the set of available SMDs (alternatives) changes continuously. Existing SMDs may leave, and new SMDs may join the network at random. Also, the status of the variable resources (e.g., C5, C7, C8, C9, C10, C11) of each SMD varies from time to time depending on usage. In fact, in a typical centralized MCC, a data logging program always runs in the background to track the values of these resources. This causes the decision matrix to change continuously, and based on the changed decision matrix, the SMD ranking also changes. In such a dynamic scenario, it is desirable to keep the decision matrix in memory for as long as resource selection is required.
Therefore, to have a comparative analysis in this aspect, we calculated the runtime considering both scenarios: (a) when the dataset was fetched from secondary storage and (b) when it was preloaded into RAM. The execution time was calculated using a timer (a Java function) in the program. The timer counted the time from data fetching (either from RAM or storage) to the completion of program execution. We executed each algorithm twenty times and took the average runtime. To eliminate outliers, we discarded execution instances that were abnormally protracted.
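The measurement procedure can be pictured as follows; this is a simplified Python analogue of the timing loop (the actual harness was written in Java), with hypothetical names:

```python
import time

def average_runtime(rank_fn, X, w, is_profit, runs=20):
    """Average wall-clock runtime of one MCDM routine over several runs."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()          # timer starts before data processing
        rank_fn(X, w, is_profit)
        timings.append(time.perf_counter() - start)
    timings.sort()                           # drop the most protracted runs as outliers
    kept = timings[:max(1, int(0.9 * len(timings)))]
    return sum(kept) / len(kept)
```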
From Table 37, it can be observed that when the MCDM programs are executed on a laptop, the average runtimes are significantly higher when the decision matrix is in secondary storage than when it is in memory. However, when these programs are executed on the smartphone, the difference is not as large. This is because the typical storage used in smartphones is much faster than the hard disks of laptops. It is also worth mentioning that, in our study, we used text files as the database when executing the programs on the smartphone. With a traditional database application, the time taken to fetch the dataset from phone storage would probably be much higher, and the difference between the dataset in memory and in storage would be significantly larger.
In our comparative analysis, we executed each algorithm multiple times for each case and noted the average runtime. The runtime of any program varies depending on several internal and external factors; that is why we took the average over the execution instances. However, it is observed that the runtime variations are much higher on a laptop than on a smartphone. This is because the number of background processes typically running on a laptop is significantly higher than on a smartphone. Also, resource scheduling on a laptop is more complex than on a smartphone. Nevertheless, the variations in each execution could be further neutralized by increasing the number of execution instances considered.

6. Discussion

In this section, we discuss the experimental findings and our observations. We also present a critical discussion on the judiciousness and practicability of this work and the findings.

6.1. Findings and Observations

In this section, we discuss the observations on the findings obtained through data analysis. As already mentioned, we have four conditions:
  • Condition 1: Full set (Case 1: complete set of 13 criteria and 50 alternatives)
  • Condition 2: Reduction in the number of alternatives keeping the criteria set unaltered (Case 2: reduced set of 10 alternatives and complete set of 13 criteria)
  • Condition 3: Variation in the criteria set (Case 3: reduced set of 6 criteria) keeping the alternative set the same (i.e., 50)
  • Condition 4: Variations in both alternative and criteria sets (Case 4: reduced set of 10 alternatives and 6 criteria).
For all conditions, we noticed some variations in the relative ranking orders. Further examining the results obtained from the different methods and their association with the final ranking (obtained using SAW), we found that for Case 1, MABAC and COPRAS are more consistent. For Case 2, COPRAS and EDAS outperformed the others in terms of consistency with the final ranking. For Case 3, EDAS and ARAS showed better consistency, while COPRAS performed reasonably well. For Case 4, COPRAS and ARAS showed relatively better consistency with the final ranking. Therefore, the first-level inference speaks in favor of COPRAS for all the conditions under consideration.
Moving further, we checked for stability of the results. We performed a sensitivity analysis for all methods under all conditions, as demonstrated in Section 5.3. Here, too, we noticed mixed performance. However, COPRAS showed reasonably stable results under all conditions, given the variations in the criteria weights, except in Case 4.
Therefore, it may be concluded that, given our problem statement and experimental setup, COPRAS performed comparatively well under all case scenarios, with ARAS being its nearest competitor in this aspect. For both methods, the procedural steps are fewer in number, and a simple ratio-based or proportional approach is followed, i.e., there is no need to identify ideal and anti-ideal solutions or to calculate distances. Therefore, the results do not show any aberrations. It may, however, be interesting to examine the performance of the algorithms when the criteria weights are predefined, i.e., not dependent on the decision matrix.
We further investigated the time complexities of the MCDM algorithms used in this paper to find the most time-efficient one. All the considered MCDM methods perform equally in this respect, though the best-case time complexity of EDAS, MABAC, and COPRAS is better than that of the others. Figure 5, Figure 6, Figure 7 and Figure 8 graphically present the case-wise comparisons of the runtimes of each MCDM method for all the scenarios. In our experiment, the COPRAS method exhibited the smallest runtime for each dataset (case) under all the considered scenarios, i.e., whether the dataset is in secondary storage or in memory, and whether the program is run on the laptop or the smartphone. Specifically, considering the average runtime over all cases and scenarios, the ranking of the MCDM methods by runtime (RT) is: RT_COPRAS < RT_MARCOS < RT_ARAS < RT_MABAC < RT_EDAS.
However, this ranking does not hold for every execution in each case. For example, from Figure 6, it can be noted that ARAS and MABAC took less time to execute in Case 1. In practice, Case 3 would probably be more common than the other cases for a typical MCC application, i.e., there would be a limited number of SMDs available as computing resources, with the application demanding a certain number of selection criteria. For this case, COPRAS took 0.05597 milliseconds on average when run on the laptop with the dataset residing in memory, and 0.32844 milliseconds on the smartphone. For dynamic resource selection in MCC, this time requirement is tolerable. However, when the dataset is in secondary storage, the runtime increases drastically on the laptop but not on the smartphone.
To obtain the effective runtime of the ranking process, the runtimes of both the MCDM method and the entropy calculation should be considered. As with the MCDM methods, when the dataset is in secondary storage, the runtime of the entropy calculation increases drastically on the laptop but not on the smartphone, as shown in Figure 9. Therefore, we can postulate that if the MCC coordinator is a laptop or desktop computer, the dataset needs to be loaded into memory before resource selection.
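For completeness, a minimal sketch of the Shannon-entropy weighting step is shown below; variable names are illustrative, the matrix is assumed to contain non-negative values with more than one alternative, and zero entries are treated as contributing nothing to the entropy term.

    import math

    def entropy_weights(matrix):
        m, n = len(matrix), len(matrix[0])
        col_sums = [sum(row[j] for row in matrix) for j in range(n)]
        p = [[matrix[i][j] / col_sums[j] for j in range(n)] for i in range(m)]
        k = 1.0 / math.log(m)
        # Entropy of each criterion; p*ln(p) is taken as 0 when p == 0.
        e = [-k * sum(p[i][j] * math.log(p[i][j]) for i in range(m) if p[i][j] > 0)
             for j in range(n)]
        d = [1.0 - ej for ej in e]       # degree of divergence per criterion
        total = sum(d)
        return [dj / total for dj in d]  # higher divergence -> larger weight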
Considering the above discussions, it can be deduced that the COPRAS method is the most suitable for resource selection in MCC in terms of correctness, robustness, and computational (time) complexity.

6.2. Rationality and Practicability

In this section, we present a critical discussion on the rationality and practicability of this study.

6.2.1. Assertion

In the previous section, we observed that for resource selection in MCC, the COPRAS method is the most favorable in all respects. However, this should not be misinterpreted to mean that COPRAS is the ideal solution for resource selection in MCC. In fact, optimized resource selection in a dynamic environment like MCC is an NP-hard problem; hence, practically no solution can be claimed as optimal. We only assert that COPRAS scales favorably in all aspects compared to the other methods considered. There is always scope to explore further for a more suitable multi-criteria resource selection algorithm that is more computing- and time-efficient.
Moreover, it should be noted that the effectiveness of an MCDM solution depends on the particular problem and the data. In real implementations of MCC, the actual SMD data would certainly change, be it across different instances of the same MCC system or across different MCC systems, because, owing to the dynamic nature of a typical MCC, the set of SMDs is not fixed. Even if the SMDs remain fixed in an MCC for a certain period, their resource values will vary depending on the applications running on them and their users' device usage behavior. Moreover, since the need for computing resources varies with application requirements, the selection criteria and weights also differ accordingly. In these cases, the datasets would differ from those used in our experiment. However, the problem behavior and data types would be the same for all MCC applications and throughout their different execution instances. Hence, a solution found suitable for the given dataset should be applicable to any similar dataset for MCC. Even if the size of the datasets varies across MCC deployments, the findings of this study should hold, because COPRAS performed comparatively better on all four datasets of different sizes considered in the experiment.

6.2.2. Application

The resource selection module is generally incorporated into the resource manager of a typical distributed system, and the resource manager, in turn, is generally part of the middleware of a three-tier system. Therefore, in the actual design and implementation of an MCC system, the MCDM-based resource selection algorithm would be integrated into the MCC middleware. This algorithm should generate a ranked list of the available SMDs based on their resources, and the MCC job scheduler would dispatch MCC jobs to the top-ranked SMDs from the list. This would ensure better turnaround time and throughput and, in turn, better QoS of the MCC.
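As an illustration only, the following sketch shows how such a middleware scheduler might consume the MCDM-generated ranking; the function and parameter names are hypothetical, and score_fn stands for any scoring routine such as the COPRAS sketch in Section 6.1.

    def dispatch_jobs(jobs, smd_ids, matrix, weights, is_benefit, score_fn):
        # Score and rank the currently available SMDs (best first).
        scores = score_fn(matrix, weights, is_benefit)
        ranked = [sid for _, sid in sorted(zip(scores, smd_ids), reverse=True)]
        # Assign jobs to the top-ranked SMDs, wrapping around if there are
        # more jobs than devices; a real scheduler would also track load,
        # availability, and job deadlines.
        return {job: ranked[i % len(ranked)] for i, job in enumerate(jobs)}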

6.2.3. Implications

The findings of this paper would allow MCC system designers and developers to adopt the right resource selection method for their MCC based on its scale as well as on the preference and priority of the resource types. They would also contribute to managerial decision-making for implementing organizational MCC. As the study simulates different scenarios and compares the available options, it can serve as a reference for decision-makers in choosing the right MCDM method for resource selection, considering the appropriate size of the employed MCC, and deciding on the right number of selection criteria.
Furthermore, the findings of this paper will allow researchers to choose a suitable MCDM method with reasonably high accuracy and low runtime complexity to solve real-life problems similar to the one discussed here. Beyond researchers in MCC and allied fields (e.g., mobile grid computing, mobile cloud computing, and other related forms of distributed computing), this study should also interest the MCDM community, who might find it motivating to nurture this problem domain and come up with novel or improved methods better suited to the associated resource dynamicity.

7. Conclusions, Limitations, and Further Research Scope

In this concluding section, we recap and summarize the presented problem, the experimental work, and the findings. We also point out the shortfalls of this study and identify future research prospects to expand this work.

7.1. Summary

In mobile crowd computing (MCC), the computing capabilities of smart mobile devices (SMDs) are exploited to execute resource-intensive jobs. For better quality of service, selecting the most capable SMDs is essential. Since the selection is made based on several diverse SMD resources, the SMD selection problem can be described as a multi-criteria decision-making (MCDM) problem.
In this paper, we performed a comparative assessment of different MCDM methods (EDAS, ARAS, MABAC, MARCOS, and COPRAS) for ranking SMDs, among a number of available SMDs, based on their resource parameters, for consideration as computing resources in MCC. The assessment was done in terms of ranking robustness and the execution time of the MCDM methods. Considering the dynamic nature of MCC, where resource selection is supposed to happen on-the-fly, the selection process needs to be as time-efficient as possible. As selection criteria, we considered both fixed (e.g., CPU and GPU power, RAM and battery capacity) and variable (e.g., current CPU and GPU load, available RAM, remaining battery) resource parameters.
We used the final score values of the alternatives obtained by the different algorithms and applied the SAW method to arrive at an aggregate ranking of the alternatives. We also compared the ranking performance of the MCDM methods used in this study, investigating their consistency with respect to the aggregate ranking and their stability through sensitivity analysis.
We calculated the time complexities of all the methods. We also assessed the actual runtime of all the methods by executing them on a Windows-based laptop and an Android-based smartphone. To assess the effect of dataset size, we executed the MCDM methods with four datasets of different sizes, obtained by varying the number of selection criteria and alternatives (SMDs) separately. For each dataset, we executed the programs under two scenarios: when the dataset resides in primary memory and when it is fetched from secondary storage.

7.2. Observation

It is observed that, in terms of correctness, consistency, and robustness, the COPRAS method exhibits the best performance under all case scenarios. In terms of time complexity, all five MCDM methods are equal, i.e., O(mn), where the decision matrix is of size m × n (m being the number of SMDs and n the number of selection criteria). However, EDAS, MABAC, and COPRAS have a better best-case complexity (Ω(m + n)). Overall, COPRAS consumed the least runtime in every execution case, i.e., for all four matrix sizes, on the laptop as well as on the smartphone.

7.3. Conclusive Statement

The COPRAS method is found to be better than the other MCDM methods considered (EDAS, ARAS, MABAC, and MARCOS) for all test parameters and in all test scenarios. Hence, it can be concluded that, among the existing MCDM methods, COPRAS would be the most suitable choice for ranking resources to select the best resource in MCC and similar problem setups.

7.4. Limitations and Improvement Scopes

We used the entropy method to calculate the criteria weights. It is an objective approach in which the criteria weights depend on the values of the decision matrix. In a dynamic environment like MCC, SMDs may join and leave the network frequently, and the status of their variable resources also changes with device usage, resulting in frequent alterations to the decision matrix. This implies that the entropy calculation must be redone every time the criteria weights are determined, which imposes a real overhead.
Here, the criteria weights were calculated dynamically based on the present resource status of the SMDs, expressed in metric terms. We did not take into account criteria preferences aligned with the resource specification preferences of MCC applications. As the dataset changes with varying criteria and alternative sets, the criteria weights also change according to the performance values of the alternatives. Hence, this approach might not provide the optimal resource ranking for the actual application requirements. Our future study can therefore explore defining the criteria weights based on the required resource specifications of a typical MCC user or application.
Furthermore, we opted for the most straightforward normalization technique, i.e., linear normalization, although various other normalization techniques are in practice. There is therefore scope to study the effect of different normalization techniques on the ranking and execution performance of the MCDM methods.
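By way of example, the sketch below contrasts a max-based linear normalization with a vector normalization for a single benefit-type criterion column; either could be plugged into a method's first step, and the column values are assumed to be positive.

    import math

    def linear_normalize(column):
        hi = max(column)
        return [x / hi for x in column]    # values scaled into (0, 1]

    def vector_normalize(column):
        norm = math.sqrt(sum(x * x for x in column))
        return [x / norm for x in column]  # column scaled to unit Euclidean length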

7.5. Open Research Prospects

The MCC environment is highly dynamic in nature: not only the set of SMDs but also the status of each SMD's resource parameters changes frequently. Therefore, resource selection needs to be not only optimal but also adaptive to an unpredictable MCC environment. This opens up scope for exploring an adaptive MCDM method that acclimates well to frequent variation in the alternatives and their values (i.e., the data matrix). Ideally, whenever there is a change in the alternative list or in a performance score, the MCDM method should be able to reflect this change in the overall ranking without reranking the whole list. This would not only minimize the SMD selection and decision-making time but also truly reflect the dynamic and scalable nature of MCC, which is not the case with the traditional MCDM methods. There is also a need for further research on realizing an MCDM method suitable for distributed resource selection in an inter-MCC system.
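A speculative sketch of such adaptive behavior is given below: if each SMD's utility is computed against fixed reference bounds rather than against the current decision matrix, a joining, leaving, or updated SMD can be re-scored in isolation and slotted into an already-sorted ranking without touching the other alternatives. All names here are hypothetical; this is a direction to explore, not a validated method.

    import bisect

    class AdaptiveRanker:
        def __init__(self, weights, ref_max):
            # Fixed reference maxima decouple each score from the other SMDs.
            self.weights, self.ref_max = weights, ref_max
            self.ranking = []  # list of (-score, smd_id), kept sorted

        def _score(self, values):
            # Weighted sum against fixed bounds (benefit-type criteria assumed).
            return sum(w * v / r for w, v, r in
                       zip(self.weights, values, self.ref_max))

        def upsert(self, smd_id, values):
            self.remove(smd_id)  # drop any stale entry for this SMD
            bisect.insort(self.ranking, (-self._score(values), smd_id))

        def remove(self, smd_id):
            self.ranking = [e for e in self.ranking if e[1] != smd_id]

        def top(self, k):
            return [smd for _, smd in self.ranking[:k]]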

Author Contributions

Conceptualization, P.K.D.P.; methodology, S.B. and S.P.; software, S.P.; validation, P.K.D.P., S.B. and S.P.; formal analysis, P.K.D.P., S.B. and S.P.; investigation, P.K.D.P., S.B. and S.P.; data curation, P.K.D.P.; writing—original draft preparation, P.K.D.P. and S.B.; writing—review and editing, P.K.D.P., S.B., S.P., D.M. and P.C.; supervision, D.M. and P.C.; funding acquisition, D.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the German Research Foundation and the Open Access Publication Fund of Technische Universität Berlin.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors wish to acknowledge the support received from the German Research Foundation and the Technische Universität Berlin.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Typical MCDM stages.
Figure 2. Research framework.
Figure 3. SMD data collection setup.
Figure 4. Pictorial representation of sensitivity analysis (Case 1): (a) EDAS, (b) COPRAS, (c) ARAS, (d) MARCOS, (e) MABAC.
Figure 5. Runtime comparison of MCDM methods on the laptop for each case when the dataset is in the memory.
Figure 6. Runtime comparison of MCDM methods on the laptop for each case when the dataset is in the secondary storage.
Figure 7. Runtime comparison of MCDM methods on the smartphone for each case when the dataset is in the phone storage.
Figure 8. Runtime comparison of MCDM methods on the smartphone for each case when the dataset is in the memory.
Figure 9. Runtime comparison for the Entropy method.
Table 1. The popular MCDM approaches and their respective popular representatives.

| MCDM Approach | Representative Example | Reference |
|---|---|---|
| Distance-based method | TOPSIS | [30,31] |
| | EDAS | [32] |
| Area-based comparison and approximation method | MABAC | [33,34] |
| Ratio-based additive method | ARAS | [35,36] |
| | SAW | [37] |
| | COPRAS | [38,39] |
| Algorithms that work under compromising situations | VIKOR | [40,41] |
| | CoCoSo | [42] |
| | MARCOS | [43] |
| | RAFSI | [29] |
Table 2. List of acronyms.

| Acronym | Full Form |
|---|---|
| AHP | Analytic Hierarchy Process |
| ANP | Analytic Network Process |
| ARAS | Additive Ratio Assessment |
| BWM | Best Worst Method |
| CoCoSo | Combined Compromise Solution |
| COMET | Characteristic Objects METhod |
| COPRAS | COmplex PRoportional ASsessment |
| CPU | Central Processing Unit |
| DEA | Data Envelopment Analysis |
| DMU | Decision Making Unit |
| EDAS | Evaluation based on Distance from Average Solution |
| ELECTRE | ELimination Et Choix Traduisant la REalité |
| ESM | Even Swaps Method |
| GDSS | Group Decision Support System |
| GPU | Graphics Processing Unit |
| GRA | Grey Relational Analysis |
| HPC | High Performance Computing |
| IoE | Internet of Everything |
| IoT | Internet of Things |
| MABAC | Multi-Attributive Border Approximation Area Comparison |
| MACBETH | Measuring Attractiveness by a Categorical Based Evaluation Technique |
| MARCOS | Measurement of Alternatives and Ranking according to COmpromise Solution |
| MARE | Multi-Attribute Range Evaluations |
| MAUT | Multi-Attribute Utility Theory |
| MCC | Mobile Crowd Computing |
| MCDM | Multi Criteria Decision Making |
| MEW | Multiplicative Exponential Weighting |
| MOORA | Multi-Objective Optimization on the basis of Ratio Analysis |
| MULTIMOORA | Multiplicative MOORA |
| PAPRIKA | Potentially All Pairwise RanKings of all possible Alternatives |
| PIPRECIA | PIvot Pairwise RElative Criteria Importance Assessment |
| PROMETHEE | Preference Ranking Organization METHod for Enrichment Evaluation |
| RAFSI | Ranking of Alternatives through Functional mapping of criterion sub-intervals into a Single Interval |
| RAM | Random Access Memory |
| REMBRANDT | Ratio Estimations in Magnitudes or deci-Bells to Rate Alternatives which are Non-DominaTed |
| SAW | Simple Additive Weighting |
| SMART | Simple Multi-Attribute Rating Technique |
| SMD | Smart Mobile Device |
| SoC | System on Chip |
| SWARA | Stepwise Weight Assessment Ratio Analysis |
| TOPSIS | Technique for Order Preference by Similarity to Ideal Solution |
| VIKOR | Više Kriterijumska optimizacija i Kompromisno Rešenje |
| WASPAS | Weighted Aggregated Sum Product Assessment |
| WPM | Weighted Product Method |
| WSM | Weighted Sum Model |
Table 3. Examples of various applications of MCDM methods.

| Application Areas of MCDM Methods | Selected References |
|---|---|
| Finance and economics | [72,73,74] |
| Waste management | [75,76,77,78] |
| Engineering and production | [79,80,81,82] |
| Organisations and corporates | [83,84,85,86] |
| Business process and operations | [87,88,89,90] |
| Supply chain management | [91,92,93,94] |
| Energy sector | [95,96,97,98] |
| Civil engineering | [99,100,101] |
| Building construction and management | [102,103,104,105] |
| City and society | [106,107,108] |
| Education and e-learning | [109,110,111,112] |
| Careers and job | [113,114,115,116] |
| Transportation | [117,118,119,120] |
| Healthcare | [121,122,123] |
Table 4. Survey of comparative analysis of different MCDM methods. (For each study, the source table additionally marks which of the following analyses were performed: sensitivity analysis, result comparison, statistical test/analysis, rank reversal, and computation/time complexity.)

| Reference | MCDM Methods Compared | Application Focus |
|---|---|---|
| [124] | ELECTRE, TOPSIS, MEW, SAW, and four versions of AHP | General MCDM problem of ranking |
| [125] | AHP and SAW | Ranking cloud render farm services |
| [126] | TOPSIS, AHP, and COMET | Assessing the severity of chronic liver disease |
| [127] | CODAS, EDAS, WASPAS, and MOORA | Selecting material handling equipment |
| [128] | TOPSIS, DEMATEL, and MACBETH | ERP package selection |
| [129] | AHP, ELECTRE, TOPSIS, and VIKOR | Enhancement of historical buildings |
| [130] | MOORA, TOPSIS, and VIKOR | Material selection of brake booster valve body |
| [131] | AHP, TOPSIS, and VIKOR | Manufacturing process selection |
| [132] | Multi-MOORA, TOPSIS, and three variants of VIKOR | Randomly generated MCDM problems (i.e., decision matrices) as per [124] |
| [133] | WPM, WSM, revised AHP, TOPSIS, and COPRAS | Sustainable housing affordability |
| [134] | SAW, TOPSIS, PROMETHEE, and COPRAS | Stock selection using modern portfolio theory |
| [135] | COMET, TOPSIS, and AHP | Assessment of mortality in patients with acute coronary syndrome |
| [136] | SWARA, COPRAS, fuzzy ANP, fuzzy AHP, fuzzy TOPSIS, SAW, and EDAS | Risk assessment in public-private partnership projects |
| [137] | WSM, VIKOR, TOPSIS, and ELECTRE | Ranking renewable energy sources |
| [138] | WSM, WPM, WASPAS, MOORA, and MULTIMOORA | Industrial robot selection |
| [139] | WSM, WPM, AHP, and TOPSIS | Seismic vulnerability assessment of RC structures |
| [140] | AHP, TOPSIS, and PROMETHEE | Determining trustworthiness of cloud service providers |
| [141] | TOPSIS and VIKOR | Finding most important product aspects in customer reviews |
| [142] | MABAC and WASPAS | Evaluating the effect of COVID-19 on countries' sustainable development |
| [143] | WSM, TOPSIS, PROMETHEE, ELECTRE, and VIKOR | Utilization of renewable energy industry |
| [144] | WSM, TOPSIS, and ELECTRE | Flood disaster risk analysis |
| [145] | MAUT, TOPSIS, PROMETHEE, and PROMETHEE GDSS | Choosing contract type for highway construction in Greece |
| [146] | TOPSIS, VIKOR, EDAS, and PROMETHEE-II | Suitable biomass material selection for maximum bio-oil yield |
| [147] | TOPSIS, VIKOR, and COPRAS | COVID-19 regional safety assessment |
| [148] | EDAS and TOPSIS | General MCDM problem |
| [149] | AHP, TOPSIS, ELECTRE III, and PROMETHEE II | Building performance simulation |
| [150] | AHP, fuzzy AHP, and ESM | Aircraft type selection |
| [151] | AHP, TOPSIS, and SAW | Intercrop selection in rubber plantations |
| [152] | AHP, TOPSIS, SAW, and PROMETHEE | Employee placement |
| [153] | TOPSIS, VIKOR, improved ELECTRE, PROMETHEE II, and WPM | Mining method selection |
| [154] | AHP, SMART, and MACBETH | Incentive-based experiment (ranking coffee shops within university campus) |
| [155] | AHP, fuzzy AHP, and fuzzy TOPSIS | Supplier selection |
| [156] | TOPSIS, SAW, VIKOR, and ELECTRE | Evaluating the quality of urban life |
| [157] | AHP, MARE, and ELECTRE III | Equipment selection |
| [158] | VIKOR and TOPSIS | Forest fire susceptibility mapping |
| [159] | PIPRECIA, MABAC, CoCoSo, and MARCOS | Measuring the performance of healthcare supply chains |
| [160] | MOORA, MULTIMOORA, and TOPSIS | Optimizing the process parameters in the electro-discharge machine |
| [161] | AHP, AHP TOPSIS, and fuzzy AHP | Mobile-based culinary recommendation system |
| [162] | TOPSIS, COPRAS, and GRA | Evaluation of teachers |
| [163] | AHP, TOPSIS, ELECTRE III, and PROMETHEE II | Urban sewer network plan selection |
| [164] | TOPSIS and AHP | Dam site selection using GIS |
| This paper | EDAS, ARAS, MABAC, COPRAS, and MARCOS | Resource selection in mobile crowd computing |
Table 5. Merits and demerits of the MCDM methods considered in this study.

EDAS
Merits:
  • Useful when there are conflicting criteria and decision-making fluctuations
  • Provides realistic solutions, as it does not consider extreme ideal points
  • Operates with a difference from the average solution instead of a distance
  • Free from the rank reversal issue
Demerits:
  • In many real-life cases, the average point does not reveal the true picture
  • More suited to risk-neutral cases

ARAS
Merits:
  • Simple computational steps with lesser complexity
  • Can operate under a compromising situation
  • A relative measurement in terms of the ratio
Demerits:
  • Works reasonably well only when the number of alternatives is limited

MABAC
Merits:
  • Stability in results
  • Systematic computation with a precise and rational solution
  • Free from rank reversal
  • Can work with a large criteria set
Demerits:
  • Does not consider non-compensation of criteria

COPRAS
Merits:
  • Evaluates the influence of maximizing and minimizing criteria separately
  • Simple calculation
  • Free from rank reversal
Demerits:
  • Provides unstable results in case of data variation, and the results may not reveal the true nature of the data

MARCOS
Merits:
  • Considers the anti-ideal and ideal solutions at the very beginning of the formation of the decision matrix
  • Determines the utility degree with respect to both solutions
  • Can work with a large set of criteria and alternatives
  • Stability in solution
  • Works on compromising results
Table 6. List of selection criteria.

| Code | Criterion | Nature | Effect Direction |
|---|---|---|---|
| C1 | CPU frequency (GHz) | Profit | (+) |
| C2 | CPU cores (in numbers) | Profit | (+) |
| C3 | GPU frequency (GHz) | Profit | (+) |
| C4 | Total RAM (GB) | Profit | (+) |
| C5 | Available memory (MB) | Profit | (+) |
| C6 | Battery capacity (mAh) | Profit | (+) |
| C7 | Battery available (%) | Profit | (+) |
| C8 | Wi-Fi strength (1–5) | Profit | (+) |
| C9 | CPU load (%) | Cost | (−) |
| C10 | GPU load (%) | Cost | (−) |
| C11 | CPU temp (°C) | Cost | (−) |
| C12 | Battery temp (°C) | Cost | (−) |
| C13 | GPU architecture (nm) | Cost | (−) |
Table 7. Decision matrix (Case 1). Columns per SMD: profit criteria C1–C8, followed by cost criteria C9–C13.
M12.22650889527001549227434514
M21.544504383140003941676394010
M31.526506269427001234467384028
M41.38650851840001158978424210
M51.38650818073000104138313810
M61.784508198230006856432323514
M72.524006385735001816016383610
M82.54624855840005659987504810
M91.72450819082700574264303428
M102.524506176740002425393454410
M112.524004285340009435347404010
M122.226246353527002432667373928
M132.287104173435005011963343828
M141.58650429543000595153343310
M152.286506191630001111977323914
M161.32400687027009054489354310
M171.544004291135001721896364710
M181.7845063876400063440454210
M191.3465069442700751272304314
M201.72450628554000225629324010
M211.344506297335001817892404514
M221.586248352140002214244383710
M231.344006173435008449524433928
M242.52710439863000161857364028
M251.54624628513500314712394210
M261.747106298330005016158384510
M272.227108193240008735721394314
M282.52624697240008757780434628
M291.32710625794000162690414014
M301.34710635373500372416373728
M312.5265048092700895703413914
M321.34450437693500562535334028
M331.38400479930003916547354410
M342.247104193840001754811364028
M351.38710627553000924148343914
M361.324504266327003015646374110
M372.58450417892700122415323614
M381.3471067593500442660343528
M392.244004174830005859922454410
M401.384508269040005642213333428
M411.58624889835008244722343610
M422.524508368130006252668353728
M431.386248279040001638415373914
M441.38400415823000264180323314
M452.586504262835006949411424028
M462.52400661930005224052413914
M471.324006276027006913138373810
M482.58624816732700295267353628
M491.74650416473000483430343710
M501.384506175340002939164394528
Table 8. Decision matrix (Case 2). Columns per SMD: profit criteria C1–C8, followed by cost criteria C9–C13.
M11.38650818073000104138313810
M102.524506176740002425393454410
M152.286506191630001111977323914
M201.72450628554000225629324010
M251.54624628513500314712394210
M301.34710635373500372416373728
M351.38710627553000924148343914
M401.384508269040005642213333428
M452.586504262835006949411424028
M501.384506175340002939164394528
Table 9. Minimized selection criteria.

| Code | Criterion | Nature | Effect Direction |
|---|---|---|---|
| C1 | CPU frequency (GHz) | Profit | (+) |
| C2 | CPU cores (in numbers) | Profit | (+) |
| C4 | Total RAM (GB) | Profit | (+) |
| C6 | Battery capacity (mAh) | Profit | (+) |
| C7 | Battery available (%) | Profit | (+) |
| C9 | CPU load (%) | Cost | (−) |
Table 10. Decision matrix (Case 3). Two SMDs are listed per row (M1–M25 on the left, M26–M50 on the right); the columns for each SMD are the profit criteria C1, C2, C4, C6, C7, followed by the cost criterion C9.
M12.2289527001592M261.74298330005061
M21.54383140003916M272.22193240008757
M31.52269427001244M282.5297240008777
M41.3851840001189M291.32257940001669
M51.38180730001013M301.3435373500374
M61.78198230006864M312.5280927008970
M72.52385735001860M321.3437693500565
M82.5455840005699M331.3879930003965
M91.72190827005726M342.24193840001748
M102.52176740002453M351.3827553000921
M112.52285340009453M361.32266327003056
M122.22353527002426M372.5817892700124
M132.28173435005019M381.3475935004466
M141.58295430005915M392.24174830005899
M152.28191630001119M401.38269040005622
M161.3287027009044M411.5889835008247
M171.54291135001718M422.52368130006226
M181.7838764000634M431.38279040001684
M191.349442700752M441.38158230002618
M201.72285540002262M452.58262835006994
M211.34297335001878M462.5261930005240
M221.58352140002242M471.32276027006931
M231.34173435008495M482.58167327002926
M242.5239863000168M491.74164730004843
M251.54285135003171M501.38175340002991
Table 11. Decision matrix (Case 4). Two SMDs are listed per row; the columns for each SMD are the profit criteria C1, C2, C4, C6, C7, followed by the cost criterion C9.
M11.38180730001013M301.3435373500374
M102.52176740002453M351.3827553000921
M152.28191630001119M401.38269040005622
M201.72285540002262M452.58262835006994
M251.54285135003171M501.38175340002991
Table 12. Criteria weights (Case 1).

| Criteria | C1 (+) | C2 (+) | C3 (+) | C4 (+) | C5 (+) | C6 (+) | C7 (+) | C8 (+) | C9 (−) | C10 (−) | C11 (−) | C12 (−) | C13 (−) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Hj | 0.8436 | 0.8556 | 0.8985 | 0.8862 | 0.9456 | 0.8998 | 0.9178 | 0.9128 | 0.9498 | 0.9552 | 0.9816 | 0.9696 | 0.8996 |
| wj | 0.1442 | 0.1332 | 0.0936 | 0.1050 | 0.0501 | 0.0924 | 0.0758 | 0.0804 | 0.0463 | 0.0414 | 0.0170 | 0.0281 | 0.0926 |
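The weights above follow Shannon's entropy formulation [182]: the normalised entropy Hj of each criterion column is computed first, and wj is the normalised divergence (1 − Hj). As a check against Table 12, C1 gives (1 − 0.8436)/Σj(1 − Hj) ≈ 0.1442. The following minimal Python sketch (an assumed implementation of the standard procedure, not the authors' published code) reproduces these two steps:

```python
import numpy as np

def entropy_weights(X):
    """Shannon entropy-based criteria weights for an m x n decision matrix X."""
    X = np.asarray(X, dtype=float)
    m, _ = X.shape
    P = X / X.sum(axis=0)                        # normalise each criterion column
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)   # treat 0 * log(0) as 0
    H = -(P * logP).sum(axis=0) / np.log(m)      # entropy H_j, scaled into [0, 1]
    d = 1.0 - H                                  # degree of divergence
    return H, d / d.sum()                        # H_j and weights w_j (sum to 1)
```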
Table 13. Ranking results of EDAS method (Case 1).

| SMD | SP | SN | NSP | NSN | AS | Rank |
|---|---|---|---|---|---|---|
| M1 | 0.137 | 0.227 | 0.423 | 0.256 | 0.340 | 35 |
| M2 | 0.145 | 0.146 | 0.446 | 0.521 | 0.484 | 25 |
| M3 | 0.031 | 0.269 | 0.096 | 0.117 | 0.106 | 50 |
| M4 | 0.251 | 0.224 | 0.771 | 0.266 | 0.518 | 21 |
| M5 | 0.277 | 0.117 | 0.852 | 0.616 | 0.734 | 30 |
| M6 | 0.246 | 0.057 | 0.758 | 0.811 | 0.785 | 7 |
| M7 | 0.165 | 0.217 | 0.508 | 0.289 | 0.398 | 5 |
| M8 | 0.230 | 0.174 | 0.708 | 0.429 | 0.568 | 32 |
| M9 | 0.146 | 0.188 | 0.450 | 0.383 | 0.416 | 15 |
| M10 | 0.115 | 0.241 | 0.354 | 0.211 | 0.283 | 44 |
| M11 | 0.210 | 0.157 | 0.648 | 0.486 | 0.567 | 16 |
| M12 | 0.098 | 0.225 | 0.300 | 0.261 | 0.281 | 45 |
| M13 | 0.195 | 0.187 | 0.601 | 0.386 | 0.493 | 23 |
| M14 | 0.311 | 0.066 | 0.957 | 0.782 | 0.870 | 2 |
| M15 | 0.190 | 0.170 | 0.583 | 0.444 | 0.514 | 33 |
| M16 | 0.168 | 0.247 | 0.517 | 0.189 | 0.353 | 22 |
| M17 | 0.086 | 0.246 | 0.265 | 0.193 | 0.229 | 34 |
| M18 | 0.325 | 0.030 | 1.000 | 0.902 | 0.951 | 47 |
| M19 | 0.132 | 0.199 | 0.408 | 0.346 | 0.377 | 1 |
| M20 | 0.155 | 0.156 | 0.476 | 0.489 | 0.482 | 26 |
| M21 | 0.039 | 0.272 | 0.120 | 0.110 | 0.115 | 48 |
| M22 | 0.233 | 0.123 | 0.718 | 0.597 | 0.658 | 11 |
| M23 | 0.112 | 0.210 | 0.344 | 0.312 | 0.328 | 37 |
| M24 | 0.162 | 0.305 | 0.499 | 0.000 | 0.250 | 46 |
| M25 | 0.132 | 0.094 | 0.406 | 0.692 | 0.549 | 41 |
| M26 | 0.092 | 0.131 | 0.283 | 0.569 | 0.426 | 18 |
| M27 | 0.221 | 0.100 | 0.680 | 0.672 | 0.676 | 29 |
| M28 | 0.209 | 0.249 | 0.644 | 0.184 | 0.414 | 10 |
| M29 | 0.111 | 0.218 | 0.343 | 0.284 | 0.314 | 31 |
| M30 | 0.131 | 0.164 | 0.403 | 0.464 | 0.433 | 28 |
| M31 | 0.251 | 0.185 | 0.772 | 0.392 | 0.582 | 14 |
| M32 | 0.105 | 0.202 | 0.324 | 0.339 | 0.331 | 36 |
| M33 | 0.131 | 0.236 | 0.403 | 0.226 | 0.315 | 40 |
| M34 | 0.156 | 0.171 | 0.480 | 0.440 | 0.460 | 27 |
| M35 | 0.298 | 0.059 | 0.919 | 0.806 | 0.862 | 24 |
| M36 | 0.048 | 0.283 | 0.146 | 0.070 | 0.108 | 3 |
| M37 | 0.238 | 0.163 | 0.732 | 0.465 | 0.599 | 49 |
| M38 | 0.079 | 0.204 | 0.243 | 0.330 | 0.287 | 13 |
| M39 | 0.159 | 0.159 | 0.490 | 0.478 | 0.484 | 43 |
| M40 | 0.259 | 0.119 | 0.796 | 0.610 | 0.703 | 8 |
| M41 | 0.292 | 0.054 | 0.897 | 0.823 | 0.860 | 4 |
| M42 | 0.229 | 0.197 | 0.705 | 0.353 | 0.529 | 19 |
| M43 | 0.214 | 0.129 | 0.660 | 0.577 | 0.619 | 12 |
| M44 | 0.208 | 0.155 | 0.639 | 0.492 | 0.566 | 17 |
| M45 | 0.273 | 0.145 | 0.839 | 0.524 | 0.682 | 20 |
| M46 | 0.094 | 0.194 | 0.289 | 0.365 | 0.327 | 9 |
| M47 | 0.110 | 0.215 | 0.339 | 0.296 | 0.317 | 38 |
| M48 | 0.306 | 0.119 | 0.941 | 0.611 | 0.776 | 39 |
| M49 | 0.107 | 0.087 | 0.330 | 0.716 | 0.523 | 6 |
| M50 | 0.113 | 0.236 | 0.347 | 0.227 | 0.287 | 42 |
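The SP, SN, NSP, NSN, and AS columns follow the usual EDAS steps: weighted positive and negative distances from the average solution, normalised and then averaged into the appraisal score. A hedged Python sketch of these steps (the standard formulation, not necessarily the authors' exact implementation):

```python
import numpy as np

def edas(X, w, benefit):
    """X: m x n decision matrix, w: criteria weights, benefit: boolean profit mask."""
    X = np.asarray(X, dtype=float)
    av = X.mean(axis=0)                                        # average solution
    pda = np.where(benefit, X - av, av - X).clip(min=0) / av   # positive distance
    nda = np.where(benefit, av - X, X - av).clip(min=0) / av   # negative distance
    sp, sn = (pda * w).sum(axis=1), (nda * w).sum(axis=1)      # SP, SN
    nsp, nsn = sp / sp.max(), 1 - sn / sn.max()                # NSP, NSN
    return (nsp + nsn) / 2                                     # appraisal score AS
```

Alternatives are then ordered by descending appraisal score.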
Table 14. Ranking results of ARAS method (Case 1).

| SMD | Si | Ki | Rank |
|---|---|---|---|
| M1 | 0.0170 | 0.4682 | 38 |
| M2 | 0.0187 | 0.5148 | 29 |
| M3 | 0.0144 | 0.3967 | 49 |
| M4 | 0.0202 | 0.5564 | 18 |
| M5 | 0.0220 | 0.6075 | 19 |
| M6 | 0.0219 | 0.6032 | 9 |
| M7 | 0.0175 | 0.4836 | 10 |
| M8 | 0.0211 | 0.5827 | 33 |
| M9 | 0.0199 | 0.5505 | 12 |
| M10 | 0.0169 | 0.4660 | 39 |
| M11 | 0.0195 | 0.5382 | 20 |
| M12 | 0.0163 | 0.4504 | 42 |
| M13 | 0.0189 | 0.5208 | 26 |
| M14 | 0.0264 | 0.7279 | 2 |
| M15 | 0.0188 | 0.5181 | 13 |
| M16 | 0.0175 | 0.4836 | 27 |
| M17 | 0.0159 | 0.4400 | 34 |
| M18 | 0.0242 | 0.6688 | 45 |
| M19 | 0.0211 | 0.5810 | 4 |
| M20 | 0.0191 | 0.5262 | 24 |
| M21 | 0.0149 | 0.4114 | 48 |
| M22 | 0.0204 | 0.5635 | 16 |
| M23 | 0.0174 | 0.4805 | 36 |
| M24 | 0.0164 | 0.4515 | 41 |
| M25 | 0.0252 | 0.6964 | 47 |
| M26 | 0.0180 | 0.4973 | 3 |
| M27 | 0.0204 | 0.5636 | 30 |
| M28 | 0.0190 | 0.5245 | 15 |
| M29 | 0.0151 | 0.4173 | 25 |
| M30 | 0.0194 | 0.5356 | 22 |
| M31 | 0.0232 | 0.6390 | 5 |
| M32 | 0.0176 | 0.4860 | 32 |
| M33 | 0.0166 | 0.4591 | 40 |
| M34 | 0.0187 | 0.5148 | 28 |
| M35 | 0.0314 | 0.8678 | 23 |
| M36 | 0.0139 | 0.3845 | 1 |
| M37 | 0.0209 | 0.5777 | 50 |
| M38 | 0.0153 | 0.4226 | 14 |
| M39 | 0.0191 | 0.5271 | 46 |
| M40 | 0.0212 | 0.5859 | 11 |
| M41 | 0.0228 | 0.6292 | 7 |
| M42 | 0.0194 | 0.5359 | 21 |
| M43 | 0.0203 | 0.5616 | 17 |
| M44 | 0.0176 | 0.4866 | 31 |
| M45 | 0.0222 | 0.6122 | 35 |
| M46 | 0.0161 | 0.4455 | 8 |
| M47 | 0.0160 | 0.4421 | 43 |
| M48 | 0.0230 | 0.6335 | 44 |
| M49 | 0.0174 | 0.4815 | 6 |
| M50 | 0.0173 | 0.4788 | 37 |
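In ARAS, Si is the weighted optimality function of each alternative and Ki = Si/S0 is its utility degree relative to the optimal alternative S0, which matches the two value columns above (for M1, 0.0170/0.4682 implies S0 ≈ 0.0363). A hedged sketch of the standard computation, not the authors' exact code:

```python
import numpy as np

def aras(X, w, benefit):
    """Returns (S_i, K_i) for each alternative; row 0 below is the optimal one."""
    X = np.asarray(X, dtype=float)
    best = np.where(benefit, X.max(axis=0), X.min(axis=0))  # optimal alternative
    Y = np.vstack([best, X])
    Y = np.where(benefit, Y, 1.0 / Y)                       # invert cost criteria
    Y = Y / Y.sum(axis=0)                                   # sum normalisation
    S = (Y * w).sum(axis=1)                                 # optimality function
    return S[1:], S[1:] / S[0]                              # S_i and K_i = S_i / S_0
```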
Table 15. Ranking results of MABAC method (Case 1).

| SMD | Sum (Si) | Rank |
|---|---|---|
| M1 | 0.03195 | 27 |
| M2 | 0.03147 | 28 |
| M3 | −0.15444 | 49 |
| M4 | 0.16694 | 13 |
| M5 | 0.17633 | 36 |
| M6 | 0.18871 | 10 |
| M7 | 0.04362 | 8 |
| M8 | 0.22907 | 25 |
| M9 | −0.03533 | 3 |
| M10 | 0.03880 | 26 |
| M11 | 0.10172 | 16 |
| M12 | −0.04397 | 39 |
| M13 | 0.08626 | 20 |
| M14 | 0.18429 | 9 |
| M15 | 0.11832 | 41 |
| M16 | −0.08972 | 15 |
| M17 | −0.11263 | 43 |
| M18 | 0.24866 | 45 |
| M19 | −0.05184 | 2 |
| M20 | 0.06734 | 24 |
| M21 | −0.13421 | 48 |
| M22 | 0.20566 | 6 |
| M23 | −0.08945 | 42 |
| M24 | −0.04221 | 37 |
| M25 | 0.08176 | 33 |
| M26 | 0.03081 | 22 |
| M27 | 0.22863 | 30 |
| M28 | 0.09664 | 5 |
| M29 | 0.00047 | 17 |
| M30 | 0.00290 | 32 |
| M31 | 0.08230 | 21 |
| M32 | −0.11850 | 46 |
| M33 | −0.10883 | 44 |
| M34 | 0.08986 | 19 |
| M35 | 0.19310 | 31 |
| M36 | −0.22082 | 7 |
| M37 | 0.07870 | 50 |
| M38 | −0.04703 | 23 |
| M39 | 0.00801 | 40 |
| M40 | 0.14808 | 14 |
| M41 | 0.25900 | 1 |
| M42 | 0.09494 | 18 |
| M43 | 0.17503 | 11 |
| M44 | −0.00397 | 34 |
| M45 | 0.17100 | 29 |
| M46 | −0.02276 | 12 |
| M47 | −0.12598 | 35 |
| M48 | 0.22869 | 47 |
| M49 | 0.03112 | 4 |
| M50 | −0.04263 | 38 |
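MABAC's Si is the sum of each alternative's criterion-wise distances from the border approximation area; positive sums place the alternative in the upper approximation area, closer to the ideal. A hedged sketch of the standard steps (not the authors' exact code):

```python
import numpy as np

def mabac(X, w, benefit):
    X = np.asarray(X, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    R = np.where(benefit, (X - lo) / (hi - lo), (X - hi) / (lo - hi))  # min-max
    V = w * (R + 1.0)                            # weighted normalised matrix
    G = V.prod(axis=0) ** (1.0 / len(V))         # border area: geometric mean
    return (V - G).sum(axis=1)                   # S_i: summed distances from G
```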
Table 16. Ranking results of COPRAS method (Case 1).

| SMD | Q | U | Rank |
|---|---|---|---|
| M1 | 0.0179 | 64.9117 | 37 |
| M2 | 0.0197 | 71.0934 | 27 |
| M3 | 0.0155 | 56.0973 | 48 |
| M4 | 0.0204 | 73.9355 | 21 |
| M5 | 0.0245 | 88.6082 | 31 |
| M6 | 0.0234 | 84.7260 | 5 |
| M7 | 0.0188 | 68.0132 | 6 |
| M8 | 0.0213 | 76.9171 | 32 |
| M9 | 0.0188 | 68.0153 | 15 |
| M10 | 0.0173 | 62.4978 | 45 |
| M11 | 0.0207 | 74.9901 | 18 |
| M12 | 0.0175 | 63.3945 | 43 |
| M13 | 0.0201 | 72.7658 | 22 |
| M14 | 0.0265 | 95.7698 | 2 |
| M15 | 0.0200 | 72.5035 | 35 |
| M16 | 0.0181 | 65.5115 | 24 |
| M17 | 0.0165 | 59.5152 | 36 |
| M18 | 0.0276 | 100.0000 | 47 |
| M19 | 0.0183 | 66.3093 | 1 |
| M20 | 0.0199 | 71.8945 | 25 |
| M21 | 0.0155 | 56.0524 | 49 |
| M22 | 0.0219 | 79.3701 | 12 |
| M23 | 0.0184 | 66.4712 | 34 |
| M24 | 0.0170 | 61.4502 | 46 |
| M25 | 0.0206 | 74.4798 | 41 |
| M26 | 0.0189 | 68.2442 | 20 |
| M27 | 0.0221 | 79.8561 | 30 |
| M28 | 0.0201 | 72.7628 | 10 |
| M29 | 0.0176 | 63.5321 | 23 |
| M30 | 0.0190 | 68.7025 | 29 |
| M31 | 0.0210 | 75.9422 | 16 |
| M32 | 0.0178 | 64.2474 | 39 |
| M33 | 0.0175 | 63.4732 | 42 |
| M34 | 0.0195 | 70.4855 | 28 |
| M35 | 0.0246 | 89.1578 | 26 |
| M36 | 0.0149 | 54.0259 | 4 |
| M37 | 0.0220 | 79.7600 | 50 |
| M38 | 0.0173 | 62.5282 | 11 |
| M39 | 0.0197 | 71.0975 | 44 |
| M40 | 0.0225 | 81.2178 | 9 |
| M41 | 0.0247 | 89.3269 | 3 |
| M42 | 0.0207 | 74.8716 | 19 |
| M43 | 0.0214 | 77.2569 | 14 |
| M44 | 0.0217 | 78.6526 | 13 |
| M45 | 0.0227 | 82.2271 | 17 |
| M46 | 0.0176 | 63.8470 | 8 |
| M47 | 0.0178 | 64.3700 | 40 |
| M48 | 0.0234 | 84.6420 | 38 |
| M49 | 0.0209 | 75.5957 | 7 |
| M50 | 0.0184 | 66.4773 | 33 |
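COPRAS aggregates the weighted, sum-normalised profit criteria into S+ and the cost criteria into S−, combines them into the relative significance Q, and expresses each alternative's utility U as a percentage of the best Q (hence U = 100.0000 for one SMD in Table 16). A hedged sketch of the standard formulation:

```python
import numpy as np

def copras(X, w, benefit):
    X, benefit = np.asarray(X, dtype=float), np.asarray(benefit, dtype=bool)
    D = (X / X.sum(axis=0)) * w                   # weighted sum-normalised matrix
    s_plus = D[:, benefit].sum(axis=1)            # effect of maximising criteria
    s_minus = D[:, ~benefit].sum(axis=1)          # effect of minimising criteria
    Q = s_plus + s_minus.sum() / (s_minus * (1.0 / s_minus).sum())
    return Q, 100.0 * Q / Q.max()                 # relative significance Q, utility U
```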
Table 17. Ranking results of MARCOS method (Case 1).

| SMD | f(Ki−) | f(Ki+) | f(Ki) | Rank |
|---|---|---|---|---|
| M1 | 0.22525 | 0.77475 | 0.56639 | 21 |
| M2 | 0.22525 | 0.77475 | 0.44928 | 36 |
| M3 | 0.22525 | 0.77475 | 0.46898 | 34 |
| M4 | 0.22525 | 0.77475 | 0.71421 | 8 |
| M5 | 0.22525 | 0.77475 | 0.52483 | 33 |
| M6 | 0.22525 | 0.77475 | 0.66153 | 27 |
| M7 | 0.22525 | 0.77475 | 0.43151 | 14 |
| M8 | 0.22525 | 0.77475 | 0.85395 | 40 |
| M9 | 0.22525 | 0.77475 | 0.48326 | 3 |
| M10 | 0.22525 | 0.77475 | 0.54869 | 23 |
| M11 | 0.22525 | 0.77475 | 0.54848 | 24 |
| M12 | 0.22525 | 0.77475 | 0.57561 | 19 |
| M13 | 0.22525 | 0.77475 | 0.71049 | 9 |
| M14 | 0.22525 | 0.77475 | 0.51506 | 29 |
| M15 | 0.22525 | 0.77475 | 0.58988 | 44 |
| M16 | 0.22525 | 0.77475 | 0.35342 | 18 |
| M17 | 0.22525 | 0.77475 | 0.32342 | 45 |
| M18 | 0.22525 | 0.77475 | 0.64073 | 47 |
| M19 | 0.22525 | 0.77475 | 0.37309 | 16 |
| M20 | 0.22525 | 0.77475 | 0.46101 | 35 |
| M21 | 0.22525 | 0.77475 | 0.41076 | 42 |
| M22 | 0.22525 | 0.77475 | 0.64097 | 15 |
| M23 | 0.22525 | 0.77475 | 0.56692 | 20 |
| M24 | 0.22525 | 0.77475 | 0.54920 | 22 |
| M25 | 0.22525 | 0.77475 | 0.50493 | 37 |
| M26 | 0.22525 | 0.77475 | 0.50176 | 30 |
| M27 | 0.22525 | 0.77475 | 0.74105 | 31 |
| M28 | 0.22525 | 0.77475 | 0.86193 | 5 |
| M29 | 0.22525 | 0.77475 | 0.44699 | 2 |
| M30 | 0.22525 | 0.77475 | 0.54493 | 26 |
| M31 | 0.22525 | 0.77475 | 0.54586 | 25 |
| M32 | 0.22525 | 0.77475 | 0.42421 | 41 |
| M33 | 0.22525 | 0.77475 | 0.31499 | 48 |
| M34 | 0.22525 | 0.77475 | 0.70693 | 10 |
| M35 | 0.22525 | 0.77475 | 0.63373 | 32 |
| M36 | 0.22525 | 0.77475 | 0.15851 | 17 |
| M37 | 0.22525 | 0.77475 | 0.44642 | 50 |
| M38 | 0.22525 | 0.77475 | 0.52343 | 38 |
| M39 | 0.22525 | 0.77475 | 0.48990 | 28 |
| M40 | 0.22525 | 0.77475 | 0.71645 | 7 |
| M41 | 0.22525 | 0.77475 | 0.67559 | 13 |
| M42 | 0.22525 | 0.77475 | 0.73176 | 6 |
| M43 | 0.22525 | 0.77475 | 0.67850 | 12 |
| M44 | 0.22525 | 0.77475 | 0.33304 | 46 |
| M45 | 0.22525 | 0.77475 | 0.87019 | 43 |
| M46 | 0.22525 | 0.77475 | 0.43541 | 1 |
| M47 | 0.22525 | 0.77475 | 0.22286 | 39 |
| M48 | 0.22525 | 0.77475 | 0.82558 | 49 |
| M49 | 0.22525 | 0.77475 | 0.37653 | 4 |
| M50 | 0.22525 | 0.77475 | 0.67977 | 11 |
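Because the ratio Ki+/Ki− equals S(AAI)/S(AI) for every alternative, f(Ki+) and f(Ki−) are constant across rows, which is why Table 17 shows 0.77475 and 0.22525 throughout; only the final utility f(Ki) discriminates the SMDs. A hedged sketch of the standard MARCOS computation:

```python
import numpy as np

def marcos(X, w, benefit):
    X = np.asarray(X, dtype=float)
    ai = np.where(benefit, X.max(axis=0), X.min(axis=0))    # ideal solution (AI)
    aai = np.where(benefit, X.min(axis=0), X.max(axis=0))   # anti-ideal (AAI)
    E = np.vstack([aai, X, ai])                             # extended matrix
    S = (np.where(benefit, E / ai, ai / E) * w).sum(axis=1)
    k_minus, k_plus = S[1:-1] / S[0], S[1:-1] / S[-1]       # utility degrees
    f_plus = k_minus / (k_plus + k_minus)                   # f(K+)
    f_minus = k_plus / (k_plus + k_minus)                   # f(K-)
    return (k_plus + k_minus) / (1 + (1 - f_plus) / f_plus
                                 + (1 - f_minus) / f_minus) # utility f(K_i)
```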
Table 18. Comparative analysis of the rankings by different MCDM methods (Case 1).

| SMD | EDAS | ARAS | MABAC | COPRAS | MARCOS | Final Rank (SAW) |
|---|---|---|---|---|---|---|
| M1 | 35 | 38 | 27 | 37 | 21 | 33 |
| M2 | 25 | 29 | 28 | 27 | 36 | 27 |
| M3 | 50 | 49 | 49 | 48 | 34 | 48 |
| M4 | 21 | 18 | 13 | 21 | 8 | 14 |
| M5 | 30 | 19 | 36 | 31 | 33 | 31 |
| M6 | 7 | 9 | 10 | 5 | 27 | 10 |
| M7 | 5 | 10 | 8 | 6 | 14 | 7 |
| M8 | 32 | 33 | 25 | 32 | 40 | 32 |
| M9 | 15 | 12 | 3 | 15 | 3 | 8 |
| M10 | 44 | 39 | 26 | 45 | 23 | 35 |
| M11 | 16 | 20 | 16 | 18 | 24 | 21 |
| M12 | 45 | 42 | 39 | 43 | 19 | 38 |
| M13 | 23 | 26 | 20 | 22 | 9 | 20 |
| M14 | 2 | 2 | 9 | 2 | 29 | 4 |
| M15 | 33 | 13 | 41 | 35 | 44 | 36 |
| M16 | 22 | 27 | 15 | 24 | 18 | 22 |
| M17 | 34 | 34 | 43 | 36 | 45 | 43 |
| M18 | 47 | 45 | 45 | 47 | 47 | 47 |
| M19 | 1 | 4 | 2 | 1 | 16 | 1 |
| M20 | 26 | 24 | 24 | 25 | 35 | 24 |
| M21 | 48 | 48 | 48 | 49 | 42 | 49 |
| M22 | 11 | 16 | 6 | 12 | 15 | 12 |
| M23 | 37 | 36 | 42 | 34 | 20 | 37 |
| M24 | 46 | 41 | 37 | 46 | 22 | 40 |
| M25 | 41 | 47 | 33 | 41 | 37 | 41 |
| M26 | 18 | 3 | 22 | 20 | 30 | 16 |
| M27 | 29 | 30 | 30 | 30 | 31 | 30 |
| M28 | 10 | 15 | 5 | 10 | 5 | 9 |
| M29 | 31 | 25 | 17 | 23 | 2 | 18 |
| M30 | 28 | 22 | 32 | 29 | 26 | 26 |
| M31 | 14 | 5 | 21 | 16 | 25 | 15 |
| M32 | 36 | 32 | 46 | 39 | 41 | 44 |
| M33 | 40 | 40 | 44 | 42 | 48 | 45 |
| M34 | 27 | 28 | 19 | 28 | 10 | 23 |
| M35 | 24 | 23 | 31 | 26 | 32 | 25 |
| M36 | 3 | 1 | 7 | 4 | 17 | 2 |
| M37 | 49 | 50 | 50 | 50 | 50 | 50 |
| M38 | 13 | 14 | 23 | 11 | 38 | 19 |
| M39 | 43 | 46 | 40 | 44 | 28 | 42 |
| M40 | 8 | 11 | 14 | 9 | 7 | 11 |
| M41 | 4 | 7 | 1 | 3 | 13 | 3 |
| M42 | 19 | 21 | 18 | 19 | 6 | 17 |
| M43 | 12 | 17 | 11 | 14 | 12 | 13 |
| M44 | 17 | 31 | 34 | 13 | 46 | 29 |
| M45 | 20 | 35 | 29 | 17 | 43 | 28 |
| M46 | 9 | 8 | 12 | 8 | 1 | 6 |
| M47 | 38 | 43 | 35 | 40 | 39 | 39 |
| M48 | 39 | 44 | 47 | 38 | 49 | 46 |
| M49 | 6 | 6 | 4 | 7 | 4 | 5 |
| M50 | 42 | 37 | 38 | 33 | 11 | 34 |
Table 19. Correlation test I (Case 1).

| Coefficient | Final Rank | EDAS Rank | ARAS Rank | MABAC Rank | COPRAS Rank | MARCOS Rank |
|---|---|---|---|---|---|---|
| Kendall's tau | SAW_Rank | 0.817 ** | 0.778 ** | 0.829 ** | 0.830 ** | 0.510 ** |
| Spearman's rho | SAW_Rank | 0.947 ** | 0.917 ** | 0.960 ** | 0.951 ** | 0.704 ** |

** Correlation is significant at the 0.01 level (2-tailed).
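The agreement between each method's ranking and the final SAW ranking can be reproduced with standard rank-correlation tests; the sketch below uses SciPy and, purely as an illustration, the ranks of the first five SMDs taken from Table 18:

```python
from scipy.stats import kendalltau, spearmanr

saw_rank  = [33, 27, 48, 14, 31]   # final SAW ranks of M1-M5 (excerpt, Table 18)
edas_rank = [35, 25, 50, 21, 30]   # EDAS ranks of the same SMDs

tau, p_tau = kendalltau(saw_rank, edas_rank)
rho, p_rho = spearmanr(saw_rank, edas_rank)
print(f"Kendall's tau = {tau:.3f} (p = {p_tau:.3f}); "
      f"Spearman's rho = {rho:.3f} (p = {p_rho:.3f})")
```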
Table 20. Criteria weights (Case 2).

| Criteria | C1 (+) | C2 (+) | C3 (+) | C4 (+) | C5 (+) | C6 (+) | C7 (+) | C8 (+) | C9 (−) | C10 (−) | C11 (−) | C12 (−) | C13 (−) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Hj | 0.6296 | 0.8716 | 0.7732 | 0.9319 | 0.8127 | 0.8225 | 0.8202 | 0.9197 | 0.8744 | 0.9120 | 0.9181 | 0.9015 | 0.7753 |
| wj | 0.1818 | 0.0630 | 0.1113 | 0.0334 | 0.0919 | 0.0871 | 0.0882 | 0.0394 | 0.0617 | 0.0432 | 0.0402 | 0.0484 | 0.1103 |
Table 21. Criteria weights (Case 3).

| Criteria | C1 (+) | C2 (+) | C4 (+) | C6 (+) | C7 (+) | C9 (−) |
|---|---|---|---|---|---|---|
| Hj | 0.8436 | 0.8556 | 0.9456 | 0.8998 | 0.9178 | 0.9498 |
| wj | 0.2660 | 0.2457 | 0.0925 | 0.1705 | 0.1398 | 0.0854 |
Table 22. Criteria weights (Case 4).

| Criteria | C1 (+) | C2 (+) | C4 (+) | C6 (+) | C7 (+) | C9 (−) |
|---|---|---|---|---|---|---|
| Hj | 0.6296 | 0.8716 | 0.8127 | 0.8225 | 0.8202 | 0.8744 |
| wj | 0.3169 | 0.1098 | 0.1602 | 0.1519 | 0.1538 | 0.1075 |
Table 23. Comparative analysis of the ranking by different MCDM methods (Case 2).

| SMD | EDAS | ARAS | MABAC | COPRAS | MARCOS | Final Rank (SAW) |
|---|---|---|---|---|---|---|
| M1 | 3 | 5 | 6 | 2 | 6 | 4 |
| M10 | 9 | 8 | 9 | 9 | 5 | 9 |
| M15 | 8 | 9 | 3 | 8 | 7 | 7 |
| M20 | 7 | 4 | 4 | 6 | 4 | 5 |
| M25 | 5 | 2 | 5 | 5 | 10 | 6 |
| M30 | 6 | 7 | 8 | 7 | 8 | 8 |
| M35 | 1 | 1 | 1 | 1 | 3 | 1 |
| M40 | 4 | 6 | 7 | 4 | 1 | 2 |
| M45 | 2 | 3 | 2 | 3 | 9 | 3 |
| M50 | 10 | 10 | 10 | 10 | 2 | 10 |
Table 24. Comparative analysis of the ranking by different MCDM methods (Case 3). Columns per SMD: EDAS, ARAS, MABAC, COPRAS, and MARCOS ranks, followed by the final rank (SAW).
M1504846484250
M216232123413
M3485049502246
M4413429345044
M5404244423243
M6202732263429
M710918112515
M832332033318
M926209204831
M10383619363533
M111112414137
M1236403740823
M134636304
M1457157106
M152444134430
M16131611162616
M17434448434649
M18343933371428
M19122222
M20444535451835
M21464342441239
M22121481575
M23373040303736
M2422242424114
M25494745472445
M26423838391734
M27303134311122
M28171912182921
M29191810194024
M3021153112917
M31282827284337
M321513269611
M33272936294540
M34293216322726
M35312625273332
M3621141191
M37474950492347
M388564289
M39454643464748
M406878208
M4191013104119
M4214171717510
M43232122211620
M44182530253927
M45333539353838
M463315153
M47353728384942
M48394147412141
M497115133112
M50252223223625
Table 25. Comparative analysis of the ranking by different MCDM methods (Case 4).

| SMD | EDAS | ARAS | MABAC | COPRAS | MARCOS | Final Rank (SAW) |
|---|---|---|---|---|---|---|
| M1 | 9 | 10 | 10 | 10 | 8 | 10 |
| M10 | 6 | 5 | 2 | 5 | 2 | 3 |
| M15 | 5 | 6 | 5 | 6 | 3 | 6 |
| M20 | 7 | 8 | 7 | 8 | 10 | 8 |
| M25 | 8 | 7 | 8 | 7 | 7 | 7 |
| M30 | 4 | 4 | 6 | 3 | 4 | 4 |
| M35 | 1 | 1 | 4 | 1 | 1 | 1 |
| M40 | 3 | 3 | 3 | 4 | 9 | 5 |
| M45 | 2 | 2 | 1 | 2 | 5 | 2 |
| M50 | 10 | 9 | 9 | 9 | 6 | 9 |
Table 26. Correlation test II (Case 2).

| Coefficient | Final Rank | EDAS Rank | ARAS Rank | MABAC Rank | COPRAS Rank | MARCOS Rank |
|---|---|---|---|---|---|---|
| Kendall's tau | SAW_Rank | 0.778 ** | 0.556 * | 0.556 * | 0.778 ** | 0.067 |
| Spearman's rho | SAW_Rank | 0.903 ** | 0.758 * | 0.709 * | 0.927 ** | 0.139 |

** Correlation is significant at the 0.01 level (2-tailed). * Correlation is significant at the 0.05 level (2-tailed).
Table 27. Correlation test III (Case 3).

| Coefficient | Final Rank | EDAS Rank | ARAS Rank | MABAC Rank | COPRAS Rank | MARCOS Rank |
|---|---|---|---|---|---|---|
| Kendall's tau | SAW_Rank | 0.763 ** | 0.701 ** | 0.659 ** | 0.700 ** | 0.407 ** |
| Spearman's rho | SAW_Rank | 0.917 ** | 0.870 ** | 0.840 ** | 0.866 ** | 0.585 ** |

** Correlation is significant at the 0.01 level (2-tailed).
Table 28. Correlation test IV (Case 4).

| Coefficient | Final Rank | EDAS Rank | ARAS Rank | MABAC Rank | COPRAS Rank | MARCOS Rank |
|---|---|---|---|---|---|---|
| Kendall's tau | SAW_Rank | 0.733 ** | 0.867 ** | 0.733 ** | 0.911 ** | 0.511 * |
| Spearman's rho | SAW_Rank | 0.891 ** | 0.952 ** | 0.867 ** | 0.964 ** | 0.685 * |

** Correlation is significant at the 0.01 level (2-tailed). * Correlation is significant at the 0.05 level (2-tailed).
Table 29. Interchange of criteria weights for sensitivity analysis (Case 1).

| Criteria | Original | Exp1 | Exp2 | Exp3 | Exp4 |
|---|---|---|---|---|---|
| C1 | 0.1441964 | 0.0169798 | 0.0501301 | 0.1441964 | 0.1441964 |
| C2 | 0.1331763 | 0.1331763 | 0.1331763 | 0.1331763 | 0.1331763 |
| C3 | 0.0936409 | 0.0936409 | 0.0936409 | 0.0936409 | 0.0936409 |
| C4 | 0.1049768 | 0.1049768 | 0.1049768 | 0.1049768 | 0.1049768 |
| C5 | 0.0501301 | 0.0501301 | 0.1441964 | 0.0501301 | 0.0925919 |
| C6 | 0.0924398 | 0.0924398 | 0.0924398 | 0.0924398 | 0.0924398 |
| C7 | 0.0757997 | 0.0757997 | 0.0757997 | 0.0757997 | 0.0757997 |
| C8 | 0.0803856 | 0.0803856 | 0.0803856 | 0.0803856 | 0.0803856 |
| C9 | 0.0462696 | 0.0462696 | 0.0462696 | 0.0462696 | 0.0462696 |
| C10 | 0.0413577 | 0.0413577 | 0.0413577 | 0.0413577 | 0.0413577 |
| C11 | 0.0169798 | 0.1441964 | 0.0169798 | 0.0925919 | 0.0169798 |
| C12 | 0.0280555 | 0.0280555 | 0.0280555 | 0.0280555 | 0.0280555 |
| C13 | 0.0925919 | 0.0925919 | 0.0925919 | 0.0169798 | 0.0501301 |
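Each experiment in Tables 29–32 keeps all but one pair of weights fixed and exchanges the weights of two criteria (for instance, Exp1 in Table 29 swaps the largest weight, C1, with the smallest, C11). A trivial sketch of this exchange scheme, with the Case 1 weight vector taken from Table 29:

```python
def swap_weights(w, i, j):
    """Return a copy of the weight vector with the weights of criteria i and j exchanged."""
    w = list(w)
    w[i], w[j] = w[j], w[i]
    return w

# Case 1, Exp1: exchange the C1 and C11 weights (indices 0 and 10).
w_original = [0.1441964, 0.1331763, 0.0936409, 0.1049768, 0.0501301, 0.0924398,
              0.0757997, 0.0803856, 0.0462696, 0.0413577, 0.0169798, 0.0280555,
              0.0925919]
w_exp1 = swap_weights(w_original, 0, 10)
```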
Table 30. Interchange of criteria weights for sensitivity analysis (Case 2).

| Criteria | Original | Exp1 | Exp2 | Exp3 | Exp4 |
|---|---|---|---|---|---|
| C1 | 0.1818299 | 0.1112996 | 0.0334131 | 0.1102984 | 0.1818299 |
| C2 | 0.0630140 | 0.0630140 | 0.0630140 | 0.0630140 | 0.0630140 |
| C3 | 0.1112996 | 0.1818299 | 0.1112996 | 0.1112996 | 0.1112996 |
| C4 | 0.0334131 | 0.0334131 | 0.1818299 | 0.0334131 | 0.0334131 |
| C5 | 0.0919374 | 0.0919374 | 0.0919374 | 0.0919374 | 0.0919374 |
| C6 | 0.0871434 | 0.0871434 | 0.0871434 | 0.0871434 | 0.0871434 |
| C7 | 0.0882454 | 0.0882454 | 0.0882454 | 0.0882454 | 0.0882454 |
| C8 | 0.0394249 | 0.0394249 | 0.0394249 | 0.0394249 | 0.0394249 |
| C9 | 0.0616680 | 0.0616680 | 0.0616680 | 0.0616680 | 0.0616680 |
| C10 | 0.0431881 | 0.0431881 | 0.0431881 | 0.0431881 | 0.0431881 |
| C11 | 0.0401855 | 0.0401855 | 0.0401855 | 0.0401855 | 0.1102984 |
| C12 | 0.0483521 | 0.0483521 | 0.0483521 | 0.0483521 | 0.0483521 |
| C13 | 0.1102984 | 0.1102984 | 0.1102984 | 0.1818299 | 0.0401855 |
Table 31. Interchange of criteria weights for sensitivity analysis (Case 3).

| Criteria | Original | Exp1 | Exp2 | Exp3 | Exp4 |
|---|---|---|---|---|---|
| C1 | 0.2660 | 0.0854 | 0.0925 | 0.2660 | 0.2660 |
| C2 | 0.2457 | 0.2457 | 0.2457 | 0.2457 | 0.1705 |
| C4 | 0.0925 | 0.0925 | 0.2660 | 0.0854 | 0.0925 |
| C6 | 0.1705 | 0.1705 | 0.1705 | 0.1705 | 0.2457 |
| C7 | 0.1398 | 0.1398 | 0.1398 | 0.1398 | 0.1398 |
| C9 | 0.0854 | 0.2660 | 0.0854 | 0.0925 | 0.0854 |
Table 32. Interchange of criteria weights for sensitivity analysis (Case 4).

| Criteria | Original | Exp1 | Exp2 | Exp3 | Exp4 |
|---|---|---|---|---|---|
| C1 | 0.3168661 | 0.1074659 | 0.1098115 | 0.3168661 | 0.3168661 |
| C2 | 0.1098115 | 0.1098115 | 0.3168661 | 0.1074659 | 0.1098115 |
| C4 | 0.1602149 | 0.1602149 | 0.1602149 | 0.1602149 | 0.1518606 |
| C6 | 0.1518606 | 0.1518606 | 0.1518606 | 0.1518606 | 0.1602149 |
| C7 | 0.1537810 | 0.1537810 | 0.1537810 | 0.1537810 | 0.1537810 |
| C9 | 0.1074659 | 0.3168661 | 0.1074659 | 0.1098115 | 0.1074659 |
Table 33. Correlation test V (sensitivity analysis—Case 1).

| Coefficient | Method | Scenario | Exp1 | Exp2 | Exp3 | Exp4 |
|---|---|---|---|---|---|---|
| Kendall's tau | EDAS | Original | 0.789 ** | 0.729 ** | 0.799 ** | 0.824 ** |
| | ARAS | | 0.812 ** | 0.781 ** | 0.868 ** | 0.896 ** |
| | MABAC | | 0.616 ** | 0.749 ** | 0.780 ** | 0.882 ** |
| | COPRAS | | 0.799 ** | 0.755 ** | 0.827 ** | 0.874 ** |
| | MARCOS | | 0.734 ** | 0.752 ** | 0.796 ** | 0.881 ** |
| Spearman's rho | EDAS | Original | 0.932 ** | 0.892 ** | 0.938 ** | 0.952 ** |
| | ARAS | | 0.948 ** | 0.936 ** | 0.971 ** | 0.981 ** |
| | MABAC | | 0.816 ** | 0.914 ** | 0.935 ** | 0.979 ** |
| | COPRAS | | 0.939 ** | 0.910 ** | 0.950 ** | 0.973 ** |
| | MARCOS | | 0.905 ** | 0.914 ** | 0.945 ** | 0.974 ** |

** Correlation is significant at the 0.01 level (2-tailed).
Table 34. Correlation test VI (sensitivity analysis—Case 2).

| Coefficient | Method | Scenario | Exp1 | Exp2 | Exp3 | Exp4 |
|---|---|---|---|---|---|---|
| Kendall's tau | EDAS | Original | 0.911 ** | 0.733 ** | 0.689 ** | 0.867 ** |
| | ARAS | | 0.778 ** | 0.689 ** | 0.956 ** | 0.733 ** |
| | MABAC | | 0.556 * | 0.200 | 0.556 * | 0.600 * |
| | COPRAS | | 0.911 ** | 0.689 ** | 0.867 ** | 0.778 ** |
| | MARCOS | | 0.511 * | 0.111 | 0.556 * | 0.867 ** |
| Spearman's rho | EDAS | Original | 0.976 ** | 0.806 ** | 0.806 ** | 0.939 ** |
| | ARAS | | 0.903 ** | 0.806 ** | 0.988 ** | 0.879 ** |
| | MABAC | | 0.709 * | 0.370 | 0.758 * | 0.745 * |
| | COPRAS | | 0.964 ** | 0.830 ** | 0.939 ** | 0.915 ** |
| | MARCOS | | 0.673 * | 0.212 | 0.661 * | 0.964 ** |

** Correlation is significant at the 0.01 level (2-tailed). * Correlation is significant at the 0.05 level (2-tailed).
Table 35. Correlation test VII (sensitivity analysis—Case 3).

| Coefficient | Method | Scenario | Exp1 | Exp2 | Exp3 | Exp4 |
|---|---|---|---|---|---|---|
| Kendall's tau | EDAS | Original | 0.665 ** | 0.685 ** | 0.980 ** | 0.863 ** |
| | ARAS | | 0.767 ** | 0.706 ** | 0.985 ** | 0.878 ** |
| | MABAC | | 0.615 ** | 0.628 ** | 0.976 ** | 0.830 ** |
| | COPRAS | | 0.778 ** | 0.719 ** | 0.982 ** | 0.879 ** |
| | MARCOS | | 0.946 ** | 0.956 ** | 1.000 ** | 0.979 ** |
| Spearman's rho | EDAS | Original | 0.844 ** | 0.863 ** | 0.998 ** | 0.964 ** |
| | ARAS | | 0.923 ** | 0.870 ** | 0.999 ** | 0.974 ** |
| | MABAC | | 0.799 ** | 0.811 ** | 0.998 ** | 0.956 ** |
| | COPRAS | | 0.926 ** | 0.880 ** | 0.998 ** | 0.974 ** |
| | MARCOS | | 0.992 ** | 0.994 ** | 1.000 ** | 0.998 ** |

** Correlation is significant at the 0.01 level (2-tailed).
Table 36. Correlation test VIII (sensitivity analysis—Case 4).

| Coefficient | Method | Scenario | Exp1 | Exp2 | Exp3 | Exp4 |
|---|---|---|---|---|---|---|
| Kendall's tau | EDAS | Original | 0.600 * | 0.600 * | 1.000 ** | 1.000 ** |
| | ARAS | | 0.600 * | 0.556 * | 1.000 ** | 1.000 ** |
| | MABAC | | 0.556 * | 0.289 | 1.000 ** | 1.000 ** |
| | COPRAS | | 0.556 * | 0.511 * | 1.000 ** | 1.000 ** |
| | MARCOS | | 1.000 ** | 0.867 ** | 1.000 ** | 1.000 ** |
| Spearman's rho | EDAS | Original | 0.709 * | 0.770 ** | 1.000 ** | 1.000 ** |
| | ARAS | | 0.745 * | 0.685 * | 1.000 ** | 1.000 ** |
| | MABAC | | 0.709 * | 0.345 | 1.000 ** | 1.000 ** |
| | COPRAS | | 0.721 * | 0.673 * | 1.000 ** | 1.000 ** |
| | MARCOS | | 1.000 ** | 0.952 ** | 1.000 ** | 1.000 ** |

** Correlation is significant at the 0.01 level (2-tailed). * Correlation is significant at the 0.05 level (2-tailed).
Table 37. Time complexity and runtimes for each MCDM method under various considerations. Runtimes are in milliseconds; m and n denote the numbers of alternatives and criteria, respectively.

| Method | Best Case | Average Case | Worst Case | Case | Laptop: Data in Memory | Laptop: Data in Secondary Storage | Smartphone: Data in Memory | Smartphone: Data in Phone Storage |
|---|---|---|---|---|---|---|---|---|
| Entropy (criteria weight calculation) | Ω(m + n) | θ(mn) | O(mn) | Case 1 | 0.28391 | 135.1061 | 0.69546 | 1.16032 |
| | | | | Case 2 | 0.08841 | 125.0397 | 0.17581 | 0.36809 |
| | | | | Case 3 | 0.12917 | 124.2696 | 0.34542 | 0.73407 |
| | | | | Case 4 | 0.06234 | 83.45512 | 0.09523 | 0.28998 |
| EDAS | Ω(m + n) | θ(mn) | O(mn) | Case 1 | 0.36754 | 124.50158 | 2.02136 | 2.46483 |
| | | | | Case 2 | 0.08993 | 65.93222 | 0.42106 | 0.63313 |
| | | | | Case 3 | 0.16748 | 67.90012 | 0.97938 | 1.36073 |
| | | | | Case 4 | 0.06874 | 54.86296 | 0.22848 | 0.39752 |
| ARAS | Ω(mn) | θ(mn) | O(mn) | Case 1 | 0.30266 | 139.12975 | 0.87001 | 1.32013 |
| | | | | Case 2 | 0.06918 | 65.64650 | 0.22711 | 0.41631 |
| | | | | Case 3 | 0.08789 | 62.64661 | 0.44734 | 0.80465 |
| | | | | Case 4 | 0.04303 | 49.42035 | 0.12672 | 0.30301 |
| MABAC | Ω(m + n) | θ(mn) | O(mn) | Case 1 | 0.27496 | 118.52908 | 1.03990 | 1.50524 |
| | | | | Case 2 | 0.09040 | 64.17373 | 0.26752 | 0.45166 |
| | | | | Case 3 | 0.11870 | 66.00892 | 0.53094 | 0.90594 |
| | | | | Case 4 | 0.07156 | 52.62466 | 0.14914 | 0.34052 |
| COPRAS | Ω(m + n) | θ(mn) | O(mn) | Case 1 | 0.12264 | 122.95953 | 0.61347 | 1.05754 |
| | | | | Case 2 | 0.04076 | 64.35327 | 0.13521 | 0.34481 |
| | | | | Case 3 | 0.05597 | 64.29061 | 0.32844 | 0.69645 |
| | | | | Case 4 | 0.03058 | 50.04589 | 0.08334 | 0.25656 |
| MARCOS | Ω(mn) | θ(mn) | O(mn) | Case 1 | 0.30410 | 127.74245 | 0.85634 | 1.29126 |
| | | | | Case 2 | 0.06955 | 64.84879 | 0.21106 | 0.40832 |
| | | | | Case 3 | 0.09898 | 64.22248 | 0.44186 | 0.81885 |
| | | | | Case 4 | 0.04487 | 53.29281 | 0.12259 | 0.29045 |
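The runtimes above are averages over repeated executions of each ranking routine; a hedged sketch of one plausible measurement harness follows (the repetition count and the use of wall-clock timing are assumptions, not details given in the paper):

```python
import time

def average_runtime_ms(method, X, w, benefit, repeats=100):
    """Average wall-clock time of one ranking routine, in milliseconds."""
    start = time.perf_counter()
    for _ in range(repeats):
        method(X, w, benefit)       # e.g. edas, aras, mabac, copras, marcos
    return 1000.0 * (time.perf_counter() - start) / repeats
```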
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.