Article

An Evidential Software Risk Evaluation Model

Xingyuan Chen and Yong Deng

1 College of Information and Engineering, Kunming University, Kunming 650214, China
2 Key Laboratory of Data Governance and Intelligent Decision, Universities of Yunnan, Kunming 650214, China
3 Institute of Fundamental and Frontier Science, University of Electronic Science and Technology of China, Chengdu 610054, China
* Authors to whom correspondence should be addressed.
Mathematics 2022, 10(13), 2325; https://doi.org/10.3390/math10132325
Submission received: 9 April 2022 / Revised: 19 June 2022 / Accepted: 29 June 2022 / Published: 2 July 2022
(This article belongs to the Special Issue Probability and Statistics in Quality and Reliability Engineering)

Abstract

Software risk management is an important factor in ensuring software quality. Therefore, software risk assessment has become a significant and challenging research area. The aim of this study is to establish a data-driven software risk assessment model named DDERM. In the proposed model, experts’ risk assessments of probability and severity can be transformed into basic probability assignments (BPAs). Deng entropy was used to measure the uncertainty of the evaluation and to calculate the criteria weights given by experts. In addition, the adjusted BPAs were fused using the rules of Dempster–Shafer evidence theory (DST). Finally, a risk matrix was used to get the risk priority. A case application demonstrates the effectiveness of the proposed method. The proposed risk modeling framework is a novel approach that provides a rational assessment structure for imprecision in software risk and is applicable to solving similar risk management problems in other domains.

1. Introduction

In today’s world, software is becoming more and more important in the economy, healthcare, society, and other aspects of life [1,2,3]. In order to ensure the quality, reliability and stability of software, many methods have been proposed [4,5], and software project risk management has become a focus of attention. A risk is an uncertain event or condition that may or may not occur; if it occurs, it has a positive or negative effect on one or more objectives. Project teams endeavor to identify and evaluate known and emergent risks throughout the software life cycle. However, risk factors are so complex and uncertain that identifying and evaluating them effectively is still an open issue.
Most research has focused on risk analysis and risk management [6,7,8], and different theories and techniques have been used to deal with risks. For example, a scheme incorporating Bayesian belief networks into software project risk management was presented by Fan et al. [9]. Hu et al. [10] proposed a model using Bayesian networks with causality constraints for risk analysis of software development projects. Odzaly et al. [11] described the underlying risk management model in an Agile risk tool. An assessment model based on the combination of a backpropagation neural network (BPNN) and rough set theory (RST) was put forward by Li et al. [12]. A computational model for reducing the probability of project failure through the prediction of risks was proposed by Filippetto et al. [13].
Among these methods, expert opinion and judgment are usually used to perform the risk assessment. One method of making statistical inferences and decisions based on expert opinion and judgment is expert knowledge elicitation, and the related literature is extensive [14,15,16]. Since most experts prefer to express their opinions in a qualitative way, the expert elicitation procedure is full of subjectivity and uncertainty, so it is important to provide a reliable framework for dealing with expert judgments [17]. Fuzzy set theory [18] is a common and accepted way of dealing with expert opinion and judgment. However, it has several flaws in expressing hesitancy among several risk values. In some cases, due to a lack of experience or information, experts may not use a single risk level to describe their assessment, but rather hesitate between several risk values, for example, between “possible” and “very possible.” The traditional fuzzy evaluation matrix cannot express this hesitation. Furthermore, forming a fuzzy evaluation matrix takes a lot of time [19], since the assignments often need to be adjusted several times, and the matrix becomes more complex as the number of risk factors increases. It is also not very efficient when dealing with complex fuzzy arithmetic operations. Therefore, it is necessary to propose a method that can efficiently represent multiple subsets without using matrices.
To solve these problems, a data-driven risk assessment model based on Dempster–Shafer evidence theory (DST) [20,21], Deng entropy [22] and the risk matrix [23] is proposed. Because it can effectively deal with uncertain information, DST is widely used in decision-making [24,25,26], risk analysis [27], information fusion [28,29], uncertainty measurement [30], fault diagnosis [31,32,33], time series analysis [34], IoT applications [35] and many other fields [36,37]. Since most experts prefer to express their opinions with linguistic information, such as good, better, best, bad, worse and worst, DST can effectively deal with the uncertain linguistic expressions involved in risk evaluation [38,39]. At the same time, it can also handle the multi-element subsets that arise from the experts’ hesitation. In some methods, the weights of experts or attributes are assigned artificially or not taken into account at all, which may lead to subjective results; hence, expert assessments need to be adjusted reasonably. In the DST framework, Deng entropy is one of the most useful methods to measure uncertainty, and it has been applied in multi-sensor information fusion [40,41], decision-making [42,43], risk and reliability assessment [44] and other applications. The higher the Deng entropy, the more uncertainty there is. Therefore, in the proposed model, Deng entropy is used to measure the degree of uncertainty in the assessments and then to obtain objective weights for the criteria given by the experts. In addition, probability and severity are two critical descriptors of risk. As a convenient and efficient risk assessment tool, the risk matrix method has been widely used in the engineering field, and it can be used to make a comprehensive assessment of probability and severity [45,46,47].
The main contributions of this paper are as follows:
(1)
Qualitative and quantitative factors are combined at different scales. Various scales are used in the assessment, which is in line with real-world scenarios and helps the experts to express their opinions effectively.
(2)
A method based on DST and Deng entropy is used for software risk assessment, which can adjust the experts’ assessment values and deal with conflicting values in the assessment, making the assessment more objective.
(3)
A data-driven software risk evaluation model is developed that integrates DST, Deng entropy and the risk matrix. The data can be converted into BPAs as soon as the experts give their evaluation values; the BPAs are then weighted and fused, and finally risk rankings are obtained to inform risk decisions. The model is effective not only in measuring uncertainty, but also in handling conflicts between expert evaluations and in integrating expert opinions.
This paper is structured as follows: In Section 2, the basic definitions and operations of DST and Deng entropy are reviewed. In Section 3, the risk assessment framework is illustrated, software risk identification and risk assessment are introduced, and an evidential model based on DST, Deng entropy and the risk matrix is proposed. In Section 4, a case application is shown. The results and discussion are presented in Section 5, and conclusions are drawn in Section 6.

2. Preliminaries

Dealing with uncertainty is still an open issue [48]. Various methods have been proposed, such as intuitionistic fuzzy values [49,50,51] and others [52]. These methods have been applied in medical analysis, clustering, network congestion alleviation, search engine optimization, etc. [53,54,55]. As an effective method for handling uncertainty, Dempster–Shafer evidence theory has been well studied [56]. Some basic concepts and operations are presented in this section.

2.1. Dempster–Shafer Evidence Theory

Dempster–Shafer evidence theory (DST) was proposed by Dempster [20] and then expanded by Shafer [21]. The basic concepts are described below.
Definition 1.
Let $\Theta$ be a finite set of $n$ mutually exclusive and exhaustive elements, $\Theta = \{\theta_1, \theta_2, \theta_3, \ldots, \theta_n\}$. $\Theta$ is called the frame of discernment (FOD). Its power set $2^{\Theta}$ contains $2^n$ elements and is denoted as
$2^{\Theta} = \{\emptyset, \{\theta_1\}, \{\theta_2\}, \ldots, \{\theta_n\}, \{\theta_1, \theta_2\}, \ldots, \{\theta_1, \theta_2, \theta_3\}, \ldots, \Theta\}.$  (1)
Definition 2.
A mass function, or basic probability assignment (BPA), is a mapping $m: 2^{\Theta} \to [0,1]$ satisfying the following constraints:
$\sum_{A \in 2^{\Theta}} m(A) = 1, \qquad m(\emptyset) = 0.$  (2)
If $m(A) > 0$, then $A$ is called a focal element; $m(A)$ represents the degree of belief supporting proposition $A$. For more information about mass functions, please refer to [57,58].
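To make these definitions concrete, the short Python sketch below (an illustration written for this presentation, with hypothetical names, not code from the study) represents a BPA as a dictionary keyed by frozensets and checks the two constraints of Definition 2.

def is_valid_bpa(m, tol=1e-9):
    """Check the constraints of Definition 2: masses sum to 1 and m(empty set) = 0."""
    total = sum(m.values())
    return abs(total - 1.0) < tol and m.get(frozenset(), 0.0) == 0.0

# Example: the FOD is {A, B}; belief 0.6 on {A}, 0.2 on {B}, 0.2 on the whole frame.
m1 = {frozenset({"A"}): 0.6, frozenset({"B"}): 0.2, frozenset({"A", "B"}): 0.2}
print(is_valid_bpa(m1))  # True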
Definition 3.
For a proposition $A \subseteq \Theta$, the belief function $Bel: 2^{\Theta} \to [0,1]$ is defined as
$Bel(A) = \sum_{B \subseteq A} m(B).$  (3)
The plausibility function $Pl: 2^{\Theta} \to [0,1]$ is defined as
$Pl(A) = \sum_{B \cap A \neq \emptyset} m(B) = 1 - Bel(\bar{A}).$  (4)
[Bel(A), Pl(A)] is the belief interval of the proposition A.
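Under the same dictionary representation, Equations (3) and (4) can be evaluated by summing masses over contained and over intersecting focal elements; the sketch below (hypothetical names, illustrative only) computes the belief interval of {A} for the BPA used above.

def bel(m, a):
    """Belief (Equation (3)): total mass of all focal elements contained in A."""
    return sum(v for b, v in m.items() if b and b <= a)

def pl(m, a):
    """Plausibility (Equation (4)): total mass of all focal elements intersecting A."""
    return sum(v for b, v in m.items() if b & a)

m1 = {frozenset({"A"}): 0.6, frozenset({"B"}): 0.2, frozenset({"A", "B"}): 0.2}
a = frozenset({"A"})
print(bel(m1, a), pl(m1, a))  # 0.6 0.8, so the belief interval of {A} is [0.6, 0.8]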
Definition 4.
Given two BPAs $m_1$ and $m_2$, their combination $m = m_1 \oplus m_2$ is mathematically defined as [21]
$m(A) = \dfrac{1}{1-k} \sum_{B \cap C = A} m_1(B)\, m_2(C), \quad A \neq \emptyset; \qquad m(\emptyset) = 0$  (5)
where $A, B, C \in 2^{\Theta}$ and
$k = \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C).$  (6)
The conflict coefficient $k$ indicates the degree of conflict between the two BPAs. When $k = 0$, $m_1$ is consistent with $m_2$; when $k = 1$, $m_1$ totally contradicts $m_2$, that is, the two pieces of evidence strongly support different and incompatible hypotheses [59].
Example 1.
Suppose $\Theta = \{A, B\}$ and two BPAs are given as follows:
$m_1: m_1(A) = 0.6, \quad m_1(B) = 0.2, \quad m_1(\theta) = 0.2$
$m_2: m_2(A) = 0.1, \quad m_2(B) = 0.7, \quad m_2(\theta) = 0.2$
As shown in Example 1, $m_1$ mainly supports object $A$ ($m_1(A) = 0.6$) and $m_2$ mainly supports object $B$ ($m_2(B) = 0.7$). Both also assign belief to the multi-element subset $\theta$, with $m_1(\theta) = m_2(\theta) = 0.2$. According to Equations (5) and (6),
$k = m_1(A) \times m_2(B) + m_1(B) \times m_2(A) = 0.6 \times 0.7 + 0.2 \times 0.1 = 0.42 + 0.02 = 0.44$
$m(A) = \dfrac{m_1(A)\, m_2(A) + m_1(A)\, m_2(\theta) + m_2(A)\, m_1(\theta)}{1-k} = \dfrac{0.6 \times 0.1 + 0.6 \times 0.2 + 0.1 \times 0.2}{0.56} = 0.3571$
$m(B) = \dfrac{m_1(B)\, m_2(B) + m_1(B)\, m_2(\theta) + m_2(B)\, m_1(\theta)}{1-k} = \dfrac{0.2 \times 0.7 + 0.2 \times 0.2 + 0.7 \times 0.2}{0.56} = 0.5714$
$m(\theta) = \dfrac{m_1(\theta)\, m_2(\theta)}{1-k} = \dfrac{0.2 \times 0.2}{0.56} = 0.0714$
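The combination rule of Equations (5) and (6) can also be checked numerically with the short sketch below (illustrative code written for this presentation, not the authors' implementation); it reproduces the values of Example 1.

def dempster_combine(m1, m2):
    """Dempster's rule (Equations (5) and (6)): conjunctive combination with conflict normalization."""
    fused, k = {}, 0.0
    for b, vb in m1.items():
        for c, vc in m2.items():
            inter = b & c
            if inter:
                fused[inter] = fused.get(inter, 0.0) + vb * vc
            else:
                k += vb * vc  # mass assigned to the empty set, i.e., the conflict
    return {a: v / (1.0 - k) for a, v in fused.items()}, k

A, B = frozenset({"A"}), frozenset({"B"})
theta = A | B
m1 = {A: 0.6, B: 0.2, theta: 0.2}
m2 = {A: 0.1, B: 0.7, theta: 0.2}
fused, k = dempster_combine(m1, m2)
print(round(k, 2))  # 0.44
print({tuple(sorted(a)): round(v, 4) for a, v in fused.items()})
# {('A',): 0.3571, ('B',): 0.5714, ('A', 'B'): 0.0714}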

2.2. Deng Entropy

Entropy is widely used in many fields, such as politics, economics, sociology, informatics, etc. There are many studies on entropy [60,61].
Definition 5.
Deng entropy is proposed in [22], which is a measure of uncertainty. Furthermore, it has been further improved in dealing with information volume [40,62]. It can be described as,
$E_d(m) = -\sum_{A \subseteq X} m(A) \log_2 \dfrac{m(A)}{2^{|A|}-1}$  (7)
where $m$ is a BPA defined on the FOD $X$, $A$ is a focal element of $m$, and $|A|$ is the cardinality of $A$.
Through a simple transformation, it can also be described as
$E_d(m) = \sum_{A \subseteq X} m(A) \log_2 \left(2^{|A|}-1\right) - \sum_{A \subseteq X} m(A) \log_2 m(A).$  (8)
Deng entropy is an improvement of Shannon entropy. In Deng entropy, the belief of each focal element $A$ is divided by $2^{|A|}-1$. If there is no uncertainty, i.e., $|A| = 1$ for every focal element, Deng entropy degenerates to Shannon entropy. Some properties and extensions of Deng entropy are discussed in [63], and more research can be found in [64,65,66].
Example 2.
Suppose there is an expert evaluation with the frame of discernment {A, B}. m(A) = 0.7 represents the expert’s belief in A, and the remaining belief of 0.3, representing the expert’s hesitation, is assigned to the whole set {A, B}, i.e., m(AB) = 0.3. The Deng entropy of the expert evaluation is calculated by Equation (7).
$E_d(m) = -0.7 \log_2 \dfrac{0.7}{2^1-1} - 0.3 \log_2 \dfrac{0.3}{2^2-1} = 1.3568$
Example 3.
Suppose there is an expert evaluation with the frame of discernment {A, B, C}: m(A) = 0.7 and m(ABC) = 0.3. The Deng entropy of the expert evaluation is calculated by Equation (7).
$E_d(m) = -0.7 \log_2 \dfrac{0.7}{2^1-1} - 0.3 \log_2 \dfrac{0.3}{2^3-1} = 1.7235$
From the above two examples, it can be seen that the value of the Deng entropy is related to the hesitation value of the experts in the evaluation. By using Deng entropy, the degree of uncertainty in evaluation information can be determined so that objective weights can be calculated. Therefore, it is a good method to apply Deng entropy in risk assessment.
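Equation (7) is straightforward to evaluate numerically; the sketch below (a hypothetical helper written for illustration) reproduces the values of Examples 2 and 3.

import math

def deng_entropy(m):
    """Deng entropy (Equation (7)): each focal element's mass is divided by 2^|A| - 1."""
    return -sum(v * math.log2(v / (2 ** len(a) - 1)) for a, v in m.items() if v > 0)

m_ex2 = {frozenset("A"): 0.7, frozenset("AB"): 0.3}
m_ex3 = {frozenset("A"): 0.7, frozenset("ABC"): 0.3}
print(round(deng_entropy(m_ex2), 4))  # 1.3568
print(round(deng_entropy(m_ex3), 4))  # 1.7235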

3. Methodology

Some methods are introduced in this section. Software risk identification is shown in Section 3.1. Software risk assessment is explained in Section 3.2. An evidential software risk evaluation model DST—Deng entropy risk matrix (DDERM)—is illustrated in Section 3.3.

3.1. Software Risk Identification

Risk identification is the first step in risk management. With the development of technology and the improvement of project management methods, a number of methods have been proposed to identify risks, such as the Delphi method, expert judgment, graphical techniques, brainstorming and other methods. Several risks are discussed in the different studies, as shown in Table 1.
Risk factors always change with the environment, and different software projects have different risks. Based on the literature review summarized in Table 1, as well as on the experts’ previous experience and an analysis of the actual project situation, the DMs consider four types of risk factors in this project: requirement risks (C1), scheduling and planning risks (C2), organization and management risks (C3) and personnel risks (C4); details are given in Table 2. The software risk breakdown structure is shown in Figure 1. The requirement risks (C1) include the four most common risks: ambiguous requirements (R1), misunderstanding the requirements (R2), frequent requirement changes (R3) and lack of effective requirement change management (R4). The scheduling and planning risks (C2) contain three risks: the plan is too ideal to be realized (S1), too many interruptions (S2), and unspecified project milestones (S3). Lack of resource management (O1) and inadequate project monitoring and controlling (O2) belong to the organization and management risks (C3). Lack of skills and experience (P1) and leave or sickness (P2) are the most common personnel risks (C4).
Notably, some of these risk factors may not be independent. For example, lack of skills and experience (P1) may make it easier to misunderstand requirements (R2). However, the more risk factors there are, the more difficult it is to quantitatively measure the relationship between risk factors. Therefore, in the proposed model, the assessed values of the experts are considered to have taken into account the interrelationship between the risk factors.

3.2. Software Risk Assessment

Software risk assessment quantifies the probability of a risk and the severity of its loss in order to obtain the overall level of system risk. Risk can be defined as follows [74]:
$\mathrm{Risk} = \mathrm{Probability}(P) \times \mathrm{Severity}(S).$  (9)
Probability and severity are two aspects of risk. The levels and linguistic terms for these two aspects are listed in Table 3 and Table 4. There are five levels of probability, from low to high: very unlikely, unlikely, even, possible and very possible. Severity also contains five levels: very little, little, medium, serious and catastrophic. There are four levels of risk, as shown in Table 5.
As a convenient risk management tool, the risk matrix is widely used. According to Equation (9) and the values in Table 3, Table 4 and Table 5, a risk matrix is given in Table 6 and its levels are shown in Figure 2. If the severity of a risk is serious but the probability of occurrence is very unlikely, the overall risk is only medium; conversely, if the probability is very possible but the impact is little, the risk is significant rather than high. For example, the probability of risk R1 is level 4 and its severity is D; by querying the risk matrix, its risk level is III. Therefore, both factors must be taken into account to determine the risk level.
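For illustration, the risk matrix of Table 6 can be encoded as a simple lookup table; the sketch below uses our own encoding (hypothetical names) and returns level III for the (level 4, severity D) example mentioned above.

# Rows: probability levels 1 (very unlikely) to 5 (very possible);
# columns: severity levels A (very little) to E (catastrophic); entries follow Table 6
# with Low = I, Medium = II, Significant = III, High = IV (Table 5).
RISK_MATRIX = {
    5: {"A": "II", "B": "III", "C": "III", "D": "IV",  "E": "IV"},
    4: {"A": "II", "B": "II",  "C": "III", "D": "III", "E": "IV"},
    3: {"A": "I",  "B": "II",  "C": "II",  "D": "III", "E": "IV"},
    2: {"A": "I",  "B": "II",  "C": "II",  "D": "III", "E": "III"},
    1: {"A": "I",  "B": "I",   "C": "II",  "D": "II",  "E": "III"},
}

def risk_level(p_level, s_level):
    """Look up the risk level from a probability level and a severity level."""
    return RISK_MATRIX[p_level][s_level]

print(risk_level(4, "D"))  # III, i.e., "Significant"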

3.3. DST—Deng Entropy Risk Matrix Model

Good risk management is an important condition for ensuring system reliability. For more research on reliability, please refer to [75,76]. A risk assessment model, the DST–Deng entropy risk matrix (DDERM), is proposed to deal with risk data. The model consists of the following steps, as illustrated in Figure 3.
Step 1. Each expert makes judgments on each risk in the risk list, including both probability and severity;
Step 2. Transform the assessment results into BPAs;
Step 3. Calculate uncertainty and adjust assignments;
Step 3.1. Calculate the uncertainty.
$DE(P_{E_i}) = -\sum_{A \subseteq X_1} m_{P,i}(A) \log_2 \dfrac{m_{P,i}(A)}{2^{|A|}-1}$  (10)
$DE(S_{E_i}) = -\sum_{B \subseteq X_2} m_{S,i}(B) \log_2 \dfrac{m_{S,i}(B)}{2^{|B|}-1}$  (11)
where the equations are evaluated separately for each risk factor (R1, R2, S1, O1, O2, P1, and so on). For each risk factor there are n experts’ assigned values, and $E_i$ denotes the assignment of the i-th expert. $X_1$ is the frame of discernment of probability, $X_1$ = {1, 2, 3, 4, 5}; $X_2$ is the frame of discernment of severity, $X_2$ = {A, B, C, D, E}.
Step 3.2. Calculate w using Equations (12) and (13).
$w_{P_{E_i}} = \dfrac{DE(P_{E_i})}{\sum_{i=1}^{n} DE(P_{E_i})}$  (12)
$w_{S_{E_i}} = \dfrac{DE(S_{E_i})}{\sum_{i=1}^{n} DE(S_{E_i})}$  (13)
Step 3.3. Modify each BPA.
$m_{wP}(A) = \sum_{i=1}^{n} w_{P_{E_i}}\, m_{P,i}(A)$  (14)
$m_{wS}(B) = \sum_{i=1}^{n} w_{S_{E_i}}\, m_{S,i}(B)$  (15)
Step 4. Fuse the adjusted BPAs using the DS rule. If there are n experts, n − 1 fusions are performed.
$m_P(A) = (m_{wP} \oplus m_{wP})(A) = \dfrac{1}{1-k} \sum_{B \cap C = A} m_{wP}(B)\, m_{wP}(C), \quad A \subseteq X_1$  (16)
$m_S(B) = (m_{wS} \oplus m_{wS})(B) = \dfrac{1}{1-k} \sum_{C \cap D = B} m_{wS}(C)\, m_{wS}(D), \quad B \subseteq X_2$  (17)
where k is the conflict coefficient defined in Equation (6).
Step 5. According to the result of fusion, the values of probability and severity can be obtained. Meanwhile, the level of probability and severity can also be obtained.
Step 6. According to the risk matrix in Figure 2, risk levels are obtained by using the probability level and the severity level.
Step 7. Get the weight of risk based on risk level, as shown in Table 7.
Step 8. Calculate and get risk prioritization.
$\mathrm{Risk} = \mathrm{Weight}(R) \times m_{wP}(A) \times m_{wS}(B).$  (18)
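A minimal sketch of Steps 3 and 4 (entropy-based weights, weighted averaging of the BPAs, and n − 1 Dempster fusions) is given below. It reuses the deng_entropy and dempster_combine helpers sketched in Section 2; all function names are illustrative only and do not come from the study.

def entropy_weights(bpas):
    """Steps 3.1-3.2 (Equations (10)-(13)): weight each expert's BPA by its share of the total Deng entropy."""
    entropies = [deng_entropy(m) for m in bpas]
    total = sum(entropies)
    return [e / total for e in entropies]

def weighted_average(bpas, weights):
    """Step 3.3 (Equations (14)-(15)): mass-weighted average of the experts' BPAs."""
    avg = {}
    for m, w in zip(bpas, weights):
        for a, v in m.items():
            avg[a] = avg.get(a, 0.0) + w * v
    return avg

def fuse_n_minus_1(m, n):
    """Step 4 (Equations (16)-(17)): combine the averaged BPA with itself n - 1 times."""
    fused = m
    for _ in range(n - 1):
        fused, _k = dempster_combine(fused, m)
    return fused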

4. A Case Application

This case is an application of software risk management. Software risk management is closely related to software quality. In recent years, the problem of poor quality software has occurred frequently, seriously affecting the production and lives of people. Therefore, it is necessary to establish an effective risk assessment model for better risk management.
As we know, effectively representing, aggregating and ranking risk factors should be the key issues in risk management. In the application, after risk identification, eleven risk factors are listed in Table 2. Three experts were invited to assess the risk. In order to represent the assessed values effectively, the assessed values are converted to BPAs. Meanwhile, Deng entropy and DST are used to aggregate risks. Finally, the risk matrix is used to calculate the risk ranking.
Step 1. For the eleven key factors in Table 2, three experts were invited to express their opinions on probability and severity according to the levels defined in Table 3 and Table 4. Here, we assume that the experts have the same knowledge weights. The evaluations are given in Table 8. For instance, for risk R1, 4 (80%) means that expert 1 is 80% sure that the probability level is 4 (“Possible”), and D (60%) means that expert 1 assigns a belief of 60% to severity level D (“Serious”).
Step 2. Transform the assessment results into BPAs.
For risk P1, the BPAs of the probability are as follows,
$m_{P,1}^{P1}: m_{P,1}^{P1}(1) = 0.4, \; m_{P,1}^{P1}(2) = 0.6$
$m_{P,2}^{P1}: m_{P,2}^{P1}(2) = 0.8, \; m_{P,2}^{P1}(\theta) = 0.2$
$m_{P,3}^{P1}: m_{P,3}^{P1}(1) = 0.85, \; m_{P,3}^{P1}(\theta) = 0.15$
The BPAs of the severity are as shown below,
$m_{S,1}^{P1}: m_{S,1}^{P1}(B) = 0.25, \; m_{S,1}^{P1}(C) = 0.75$
$m_{S,2}^{P1}: m_{S,2}^{P1}(B) = 0.75, \; m_{S,2}^{P1}(\theta) = 0.25$
$m_{S,3}^{P1}: m_{S,3}^{P1}(C) = 0.8, \; m_{S,3}^{P1}(\theta) = 0.2$
Since $\sum_{A \in 2^{\Theta}} m(A) = 1$, if the beliefs assigned by an expert to a risk do not sum to 1, the remaining value is assigned to $\theta$ (the whole frame).
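In code, Step 2 amounts to normalizing each expert's stated beliefs and sending the unassigned remainder to θ. A possible sketch (illustrative names; X1 as defined in Section 3.3):

X1 = frozenset({1, 2, 3, 4, 5})  # frame of discernment for probability

def to_bpa(assessment, frame):
    """Turn an expert's partial assessment (e.g., {1: 0.4, 2: 0.6}) into a BPA;
    any missing belief is assigned to the whole frame (theta)."""
    bpa = {frozenset({level}): value for level, value in assessment.items()}
    remainder = 1.0 - sum(assessment.values())
    if remainder > 1e-9:
        bpa[frame] = remainder
    return bpa

# Probability assessments of the three experts for risk P1 (from Table 8):
p1_prob = [to_bpa({1: 0.4, 2: 0.6}, X1), to_bpa({2: 0.8}, X1), to_bpa({1: 0.85}, X1)]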
Step 3. Calculate uncertainty and adjust assignments.
Step 3.1. Calculate the uncertainty. Still using P1 as an example, based on Equation (10) the uncertainty degree of probability is calculated as follows:
$DE(P_{E_1}^{P1}) = -\sum_{A \subseteq X_1} m_{P,1}^{P1}(A) \log_2 \dfrac{m_{P,1}^{P1}(A)}{2^{|A|}-1} = -0.4\log_2 0.4 - 0.6\log_2 0.6 = 0.9710$
$DE(P_{E_2}^{P1}) = -\sum_{A \subseteq X_1} m_{P,2}^{P1}(A) \log_2 \dfrac{m_{P,2}^{P1}(A)}{2^{|A|}-1} = -0.8\log_2 0.8 - 0.2\log_2 \dfrac{0.2}{2^5-1} = 1.7128$
$DE(P_{E_3}^{P1}) = -\sum_{A \subseteq X_1} m_{P,3}^{P1}(A) \log_2 \dfrac{m_{P,3}^{P1}(A)}{2^{|A|}-1} = -0.85\log_2 0.85 - 0.15\log_2 \dfrac{0.15}{2^5-1} = 1.3530$
Based on Equation (11), the uncertainty degrees of severity are $DE(S_{E_1}^{P1}) = 0.8113$, $DE(S_{E_2}^{P1}) = 2.0498$, $DE(S_{E_3}^{P1}) = 1.7128$.
Step 3.2. The weights are calculated using Equation (12).
$w_{P_{E_1}^{P1}} = \dfrac{DE(P_{E_1}^{P1})}{\sum_{i=1}^{3} DE(P_{E_i}^{P1})} = 0.2405, \quad w_{P_{E_2}^{P1}} = \dfrac{DE(P_{E_2}^{P1})}{\sum_{i=1}^{3} DE(P_{E_i}^{P1})} = 0.4243, \quad w_{P_{E_3}^{P1}} = \dfrac{DE(P_{E_3}^{P1})}{\sum_{i=1}^{3} DE(P_{E_i}^{P1})} = 0.3352$
In the same way, Equation (13) gives the severity weights of P1: $w_{S_{E_1}^{P1}} = 0.1774$, $w_{S_{E_2}^{P1}} = 0.4482$, $w_{S_{E_3}^{P1}} = 0.3745$.
Step 3.3. Modify each BPA based on Equations (14) and (15).
$m_{wP}^{P1}(1) = \sum_{i=1}^{3} w_{P_{E_i}^{P1}}\, m_{P,i}^{P1}(1) = 0.2405 \times 0.4 + 0.3352 \times 0.85 = 0.3811$
$m_{wP}^{P1}(2) = \sum_{i=1}^{3} w_{P_{E_i}^{P1}}\, m_{P,i}^{P1}(2) = 0.2405 \times 0.6 + 0.4243 \times 0.8 = 0.4838$
$m_{wP}^{P1}(\theta) = \sum_{i=1}^{3} w_{P_{E_i}^{P1}}\, m_{P,i}^{P1}(\theta) = 0.4243 \times 0.2 + 0.3352 \times 0.15 = 0.1351$
In the same way, $m_{wS}^{P1}(B) = 0.3805$, $m_{wS}^{P1}(C) = 0.4326$, $m_{wS}^{P1}(\theta) = 0.1869$.
Step 4. Fuse the adjusted BPAs using the DS rule n − 1 times based on Equations (16) and (17). Because there are three experts, the fusion is performed two times.
$m_{P}^{P1}(1) = ((m_{wP}^{P1} \oplus m_{wP}^{P1}) \oplus m_{wP}^{P1})(1) = 0.3630$
$m_{P}^{P1}(2) = ((m_{wP}^{P1} \oplus m_{wP}^{P1}) \oplus m_{wP}^{P1})(2) = 0.6304$
$m_{P}^{P1}(\theta) = ((m_{wP}^{P1} \oplus m_{wP}^{P1}) \oplus m_{wP}^{P1})(\theta) = 0.0066$
$m_{S}^{P1}(B) = ((m_{wS}^{P1} \oplus m_{wS}^{P1}) \oplus m_{wS}^{P1})(B) = 0.4256$
$m_{S}^{P1}(C) = ((m_{wS}^{P1} \oplus m_{wS}^{P1}) \oplus m_{wS}^{P1})(C) = 0.5587$
$m_{S}^{P1}(\theta) = ((m_{wS}^{P1} \oplus m_{wS}^{P1}) \oplus m_{wS}^{P1})(\theta) = 0.0158$
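Chaining the sketches introduced earlier (to_bpa, entropy_weights, weighted_average and fuse_n_minus_1) reproduces the fused probability masses of P1 up to rounding; this is a quick numerical check, not the authors' code.

weights = entropy_weights(p1_prob)        # approximately [0.2405, 0.4243, 0.3352]
m_wp = weighted_average(p1_prob, weights)  # {1}: 0.3811, {2}: 0.4838, theta: 0.1351
m_p = fuse_n_minus_1(m_wp, n=3)
print({tuple(sorted(a)): round(v, 3) for a, v in m_p.items()})
# approximately {(1,): 0.363, (2,): 0.630, (1, 2, 3, 4, 5): 0.007}, matching Table 9 up to rounding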
Step 5. According to the result of fusion, the values of probability and severity can be obtained, along with the probability and severity levels. The fused results are presented in Table 9.
Step 6. According to the risk matrix, the probability level and the severity level can be used to obtain the risk level, as shown in Table 10.
Step 7. Get the weight of risk based on risk level, as shown in Table 7.
Step 8. Calculate the risk prioritization. Based on Equation (18), the overall value of each risk is calculated; it is shown in the last column of Table 10. Then, for each risk level, the largest value is used as the final value, as shown in Table 11 and Figure 4. Finally, the risks are prioritized.
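As a quick arithmetic check (our own illustration, with hypothetical names), each overall value in Table 10 is the product of the probability value, the severity value, and the corresponding weight from Table 7; for example, for risk R1 at probability level 4 and severity D:

WEIGHT = {(4, "D"): 16, (5, "D"): 20}  # excerpt of Table 7, indexed by (probability level, severity level)
overall_r1_4d = 0.4691 * 0.7342 * WEIGHT[(4, "D")]
print(round(overall_r1_4d, 4))  # 5.5106, the value reported in Table 10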

5. Results and Discussion

In this section, the results are analyzed. Furthermore, to demonstrate the advantages of the proposed method, a discussion is presented in the following.

5.1. Results

According to Table 11 and Figure 4, the results of the proposed risk assessment model DDERM can be seen from two perspectives.
For each risk factor, there may be different risk levels. For example, R1 contains two levels—one is III and the other is IV, while O1 has only one risk level II. The project manager and DM can consider whether to focus on high-level or low-level risk factors, according to the specific situation of the project, the nature of risks, different software life cycle periods, and risk preferences.
For each risk level, the risk factors can be sorted. In level I, R4 > S1 > P1. In level II, O1 > P2 > R3 > S2 > P1 > O2 > S1 > S3 > R4. In level III, R2 > S3 > R1 > O2 > R3 > P2 > S2. Furthermore, R1 > R2 in level IV. This means that, for the highest risk level ’High’, R1 must be addressed first, and then R2. R2 is the most important in level ’Significant’. For the risk level ’Medium’, O1 is of most concern, and in ’Low’, R4 needs the most attention. The project manager and DMs can make decisions based on the different risk levels.
Based on the results, some explanations are needed. R4 > P1 means that “lack of effective requirement change management” presents a higher risk than “lack of skills and experience”. This may be because requirements changes occur frequently, and in this case there is a significant risk if requirements changes are not managed effectively. At the same time, the members of this project team are all experienced in development, so, in comparison, the risk of “lack of skills and experience” is lower. In addition, R1 > R2, which means that ambiguous requirements are riskier than misunderstood requirements. This result may seem unreasonable; however, in this case, the possibility that the requirements analyst completely misunderstands the requirements may be lower than the possibility that the user expresses the requirements ambiguously, so R1 > R2. The assessed values of the experts are considered to have taken into account the interrelationship between the risk factors.

5.2. Compared with Other Software Risk Assessment Method

In this section, we compare the constructed DDERM model with several existing methods, including fuzzy set theory with a hierarchical structure [68,77], fuzzy DEMATEL, FMCDM and TODIM approaches [69], DEMATEL, ANFIS MCDM and F-TODIM approaches [70], and an entropy-based method [67]. The comparison is shown in Table 12. We can draw the following conclusions:
(1)
In [68,77], the weights are given in advance. Thus, weights are static and subjective. In contrast, in the DDERM approach, the weights are measured by the degree of uncertainty in the assessment. The weights are objective and independent, and relate only to the assessment of risk factors. When different experts give different values for the same risk factor, the weights are definitely different. If the same expert evaluates different risks differently, the weights must be distinct. This means that the weight is dependent on how reliable the expert is for that risk. Besides, the evaluations of different experts have no effect on the weights of other experts.
(2)
In [69,70], multiple modifications to the judgment matrix are frequently required because the judgment matrix created during the evaluation process is not completely consistent. The judgment matrix needs to be modified more than 4 times. In the DDERM approach, there is no need to establish and repeatedly adjust the matrix. Only expert evaluation and risk matrix are required. At the same time, DDERM has the advantage of DST in expressing uncertain information, which effectively assigns risk value to multi subsets.
(3)
An entropy-based method is used for software risk assessment in [67]; however, it does not effectively solve the problem of conflicting information in the assessment. Similarly, it is well known that Dempster’s combination rule is very important for multi-source combination, but when fusing highly conflicting evidence, counter-intuitive results often occur. For example, suppose there are two options, A and B, and ten experts in total. One expert is very sure that the answer is A (assuming a BPA of 0.99), while nine experts choose B but with lower confidence (assuming a BPA of 0.6 each). After using Dempster’s combination rule, the choice is A. Obviously, however, we are more inclined to trust the choice of the nine experts. In the DDERM method, based on Deng entropy, a higher entropy value yields a higher weight, which can effectively mitigate this drawback of Dempster’s combination rule regarding conflicts. In the example above, this approach effectively downplays the impact of an individual expert’s error on the overall assessment.
Table 12. Comparison of different methods of software risk assessment.

Literature | Processing Method | Calculation of Weights | Complexity | Subjectivity/Objectivity
[68,77] | Fuzzy set theory and hierarchical structure analysis | Given in advance | Convert the linguistic values to triangular fuzzy numbers and multiply the fuzzy numbers | Weights are static and relatively subjective
[69,70] | Fuzzy DEMATEL + FMCDM + TODIM / DEMATEL + ANFIS MCDM + IF-TODIM | Fuzzy DEMATEL + Fuzzy TODIM | Multiple adjustments of the fuzzy matrix | Using integrated fuzzy approaches, assessment results are relatively objective
[67] | Entropy | Entropy weight method | Simple calculation | Uncertainty is taken into account, relatively objective
Our method | DST + Deng entropy + risk matrix | Deng entropy | Data-driven model that does not need to calculate and adjust parameters many times | Considers uncertainty and conflicts between experts, more objective

6. Conclusions

The evaluation of software risks plays an important role in the field of software development. To assess risk efficiently, a data-driven software risk evaluation model is proposed in this paper. We first discussed the drawbacks of the existing methods. Then, 11 software risks were identified, and the software risk breakdown structure and risk matrix were illustrated. Furthermore, an evidential software risk evaluation model based on DST, Deng entropy and the risk matrix was proposed. Finally, we applied the proposed method to a case application and compared our results to existing methods. By comparison, the proposed method can express more uncertainty and helps the domain experts to express their opinions effectively. Meanwhile, it can adjust the expert assessment values and deal with conflicting values in the assessment. In short, our method not only avoids the complexity of matrix-based methods, but also improves the ability to handle uncertainty. However, some limitations are also highlighted. Considering more software risk factors would increase the validity of the results, and multiple weights were not considered. In addition, the evaluation process needs to be repeated when the risk factors change, since risk assessment is a dynamic process that exists throughout the life cycle of software.
Our future research work will mainly focus on considering the weight of risk attributes, giving experts different knowledge weights and improving the model to increase its applicability.

Author Contributions

Formal analysis, X.C. and Y.D.; Methodology, Y.D.; Writing—original draft, X.C.; Writing—review & editing, X.C. All authors have read and agreed to the published version of the manuscript.

Funding

The work is partially supported by National Natural Science Foundation of China (Grant No. 61973332), JSPS Invitational Fellowships for Research in Japan (Short-term).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Fedushko, S.; Ustyianovych, T. Medical card data imputation and patient psychological and behavioral profile construction. Procedia Comput. Sci. 2019, 160, 354–361. [Google Scholar] [CrossRef]
  2. Pawade, D.; Pawade, Y.R.; Velankar, A.; Patel, R.K.; Mantri, Y. RAY: An App for Determining Decision-Making Power of a Person. I-Manag. J. Mob. Appl. Technol. 2016, 3, 33. [Google Scholar]
  3. Fedushko, S.; Ustyianovych, T. Operational Intelligence Software Concepts for Continuous Healthcare Monitoring and Consolidated Data Storage Ecosystem. In Proceedings of the International Conference on Computer Science, Engineering and Education Applications, Kiev, Ukraine, 21–22 January 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 545–557. [Google Scholar]
  4. Triantafyllou, I.S.; Koutras, M.V. Reliability Properties of (n, f, k) Systems. IEEE Trans. Reliab. 2014, 63, 357–366. [Google Scholar] [CrossRef]
  5. Triantafyllou, I.S. On the Consecutive k1 and k2-out-of-n Reliability Systems. Mathematics 2020, 8, 630. [Google Scholar] [CrossRef] [Green Version]
  6. Boehm, B. Software risk management. In Proceedings of the European Software Engineering Conference, University of Warwick, Coventry, UK, 11–15 September 1989; Springer: Berlin/Heidelberg, Germany, 1989; pp. 1–19. [Google Scholar]
  7. Verdon, D.; McGraw, G. Risk analysis in software design. IEEE Secur. Priv. 2004, 2, 79–84. [Google Scholar] [CrossRef]
  8. Hu, Y.; Du, J.; Zhang, X.; Hao, X.; Ngai, E.; Fan, M.; Liu, M. An integrative framework for intelligent software project risk planning. Decis. Support Syst. 2013, 55, 927–937. [Google Scholar] [CrossRef]
  9. Fan, C.F.; Yu, Y.C. BBN-based software project risk management. J. Syst. Softw. 2004, 73, 193–203. [Google Scholar] [CrossRef]
  10. Hu, Y.; Zhang, X.; Ngai, E.; Cai, R.; Liu, M. Software project risk analysis using Bayesian networks with causality constraints. Decis. Support Syst. 2013, 56, 439–449. [Google Scholar] [CrossRef]
  11. Odzaly, E.E.; Greer, D.; Stewart, D. Agile risk management using software agents. J. Ambient Intell. Humaniz. Comput. 2018, 9, 823–841. [Google Scholar] [CrossRef] [Green Version]
  12. Li, X.; Jiang, Q.; Hsu, M.K.; Chen, Q. Support or risk? Software project risk assessment model based on rough set theory and backpropagation neural network. Sustainability 2019, 11, 4513. [Google Scholar] [CrossRef] [Green Version]
  13. Filippetto, A.S.; Lima, R.; Barbosa, J.L.V. A risk prediction model for software project management based on similarity analysis of context histories. Inf. Softw. Technol. 2021, 131, 106497. [Google Scholar] [CrossRef]
  14. Authority, E.F.S. Guidance on expert knowledge elicitation in food and feed safety risk assessment. EFSA J. 2014, 12, 3734. [Google Scholar]
  15. Bolger, F. The selection of experts for (probabilistic) expert knowledge elicitation. In Elicitation; Springer: Berlin/Heidelberg, Germany, 2018; pp. 393–443. [Google Scholar]
  16. O’Hagan, A. Expert knowledge elicitation: Subjective but scientific. Am. Stat. 2019, 73, 69–81. [Google Scholar] [CrossRef] [Green Version]
  17. Yazdi, M.; Zarei, E. Step Forward on How to Treat Linguistic Terms in Judgment in Failure Probability Estimation. In Linguistic Methods Under Fuzzy Information in System Safety and Reliability Analysis; Springer: Berlin/Heidelberg, Germany, 2022; pp. 193–200. [Google Scholar]
  18. Zadeh, L.A. Fuzzy sets. In Fuzzy Sets, Fuzzy Logic, and Fuzzy Systems: Selected Papers by Lotfi a Zadeh; World Scientific: Singapore, 1996; pp. 394–432. [Google Scholar]
  19. Xiao, F. CaFtR: A Fuzzy Complex Event Processing Method. Int. J. Fuzzy Syst. 2021, 24, 1098–1111. [Google Scholar] [CrossRef]
  20. Dempster, A.P. Upper and lower probabilities induced by a multivalued mapping. In Classic Works of the Dempster-Shafer Theory of Belief Functions; Springer: Berlin/Heidelberg, Germany, 2008; pp. 57–72. [Google Scholar]
  21. Shafer, G. A Mathematical Theory of Evidence; Princeton University Press: Princeton, NJ, USA, 1976. [Google Scholar]
  22. Deng, Y. Deng entropy. Chaos Solitons Fractals 2016, 91, 549–553. [Google Scholar] [CrossRef]
  23. Garvey, P.R.; Lansdowne, Z.F. Risk matrix: An approach for identifying, assessing, and ranking program risks. Air Force J. Logist. 1998, 22, 18–21. [Google Scholar]
  24. Song, Y.; Fu, Q.; Wang, Y.F.; Wang, X. Divergence-based cross entropy and uncertainty measures of Atanassov’s intuitionistic fuzzy sets with their application in decision making. Appl. Soft Comput. 2019, 84, 105703. [Google Scholar] [CrossRef]
  25. Song, M.; Sun, C.; Cai, D.; Hong, S.; Li, H. Classifying vaguely labeled data based on evidential fusion. Inf. Sci. 2022, 583, 159–173. [Google Scholar] [CrossRef]
  26. Khalaj, M.; Khalaj, F. An improvement decision-making method by similarity and belief function theory. In Communications in Statistics-Theory and Methods; Taylor & Francis: Abingdon, UK, 2021; pp. 1–19. [Google Scholar]
  27. Zhang, L.; Wang, Y.; Wu, X. Cluster-based information fusion for probabilistic risk analysis in complex projects under uncertainty. Appl. Soft Comput. 2021, 104, 107189. [Google Scholar] [CrossRef]
  28. Li, Y.; Deng, Y. Generalized Ordered Propositions Fusion Based on Belief Entropy. Int. J. Comput. Commun. Control 2018, 13, 792–807. [Google Scholar] [CrossRef]
  29. Su, X.; Li, L.; Shi, F.; Qian, H. Research on the fusion of dependent evidence based on mutual information. IEEE Access 2018, 6, 71839–71845. [Google Scholar] [CrossRef]
  30. Moral-García, S.; Abellán, J. Required mathematical properties and behaviors of uncertainty measures on belief intervals. Int. J. Intell. Syst. 2021, 36. [Google Scholar] [CrossRef]
  31. Ghosh, N.; Saha, S.; Paul, R. iDCR: Improved Dempster Combination Rule for multisensor fault diagnosis. Eng. Appl. Artif. Intell. 2021, 104, 104369. [Google Scholar] [CrossRef]
  32. Wang, T.; Liu, W.; Cabrera, L.V.; Wang, P.; Wei, X.; Zang, T. A novel fault diagnosis method of smart grids based on memory spiking neural P systems considering measurement tampering attacks. Inf. Sci. 2022, 596, 520–536. [Google Scholar] [CrossRef]
  33. Fan, J.; Qi, Y.; Liu, L.; Gao, X.; Li, Y. Application of an information fusion scheme for rolling element bearing fault diagnosis. Meas. Sci. Technol. 2021, 32, 075013. [Google Scholar] [CrossRef]
  34. Song, X.; Xiao, F. Combining time-series evidence: A complex network model based on a visibility graph and belief entropy. In Applied Intelligence; Springer: Berlin/Heidelberg, Germany, 2022. [Google Scholar] [CrossRef]
  35. Bezerra, E.D.C.; Teles, A.S.; Coutinho, L.R.; da Silva e Silva, F.J. Dempster–Shafer Theory for Modeling and Treating Uncertainty in IoT Applications Based on Complex Event Processing. Sensors 2021, 21, 1863. [Google Scholar] [CrossRef]
  36. Elmore, P.A.; Petry, F.E.; Yager, R.R. Dempster–Shafer approach to temporal uncertainty. IEEE Trans. Emerg. Top. Comput. Intell. 2017, 1, 316–325. [Google Scholar] [CrossRef]
  37. Deng, Y. Random Permutation Set. Int. J. Comput. Commun. Control 2022, 17, 4542. [Google Scholar] [CrossRef]
  38. Shams, G.; Hatefi, S.M.; Nemati, S. A Dempster-Shafer evidence theory for environmental risk assessment in failure modes and effects analysis of Oil and Gas Exploitation Plant. Sci. Iran. 2022. [Google Scholar] [CrossRef]
  39. Hatefi, S.M.; Basiri, M.E.; Tamošaitienė, J. An evidential model for environmental risk assessment in projects using dempster–shafer theory of evidence. Sustainability 2019, 11, 6329. [Google Scholar] [CrossRef] [Green Version]
  40. Deng, Y. Information Volume of Mass Function. Int. J. Comput. Commun. Control 2020, 15, 3983. [Google Scholar] [CrossRef]
  41. Xiao, F. An improved method for combining conflicting evidences based on the similarity measure and belief function entropy. Int. J. Fuzzy Syst. 2018, 20, 1256–1266. [Google Scholar] [CrossRef]
  42. Xiong, L.; Su, X.; Qian, H. Conflicting evidence combination from the perspective of networks. Inf. Sci. 2021, 580, 408–418. [Google Scholar] [CrossRef]
  43. Chen, L.; Li, Z.; Deng, X. Emergency alternative evaluation under group decision makers: A new method based on entropy weight and DEMATEL. Int. J. Syst. Sci. 2020, 51, 570–583. [Google Scholar] [CrossRef]
  44. Gao, X.; Su, X.; Qian, H.; Pan, X. Dependence assessment in Human Reliability Analysis under uncertain and dynamic situations. In Nuclear Engineering and Technology; Elsevier: Amsterdam, The Netherlands, 2021. [Google Scholar] [CrossRef]
  45. Ni, H.; Chen, A.; Chen, N. Some extensions on risk matrix approach. Saf. Sci. 2010, 48, 1269–1278. [Google Scholar] [CrossRef]
  46. Ruan, X.; Yin, Z.; Frangopol, D.M. Risk matrix integrating risk attitudes based on utility theory. Risk Anal. 2015, 35, 1437–1447. [Google Scholar] [CrossRef]
  47. Jianxing, Y.; Haicheng, C.; Shibo, W.; Haizhao, F. A novel risk matrix approach based on cloud model for risk assessment under uncertainty. IEEE Access 2021, 9, 27884–27896. [Google Scholar] [CrossRef]
  48. Wen, T.; Cheong, K.H. The fractal dimension of complex networks: A review. Inf. Fusion 2021, 73, 87–102. [Google Scholar] [CrossRef]
  49. Xie, D.; Xiao, F.; Pedrycz, W. Information Quality for Intuitionistic Fuzzy Values with Its Application in Decision Making. In Engineering Applications of Artificial Intelligence; Elsevier: Amsterdam, The Netherlands, 2021. [Google Scholar]
  50. Wang, Z.; Xiao, F.; Ding, W. Interval-valued intuitionistic fuzzy Jenson-Shannon divergence and its application in multi-attribute decision making. In Applied Intelligence; Springer: Berlin/Heidelberg, Germany, 2022. [Google Scholar] [CrossRef]
  51. Xiao, F. A distance measure for intuitionistic fuzzy sets and its application to pattern classification problems. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 3980–3992. [Google Scholar] [CrossRef]
  52. Babajanyan, S.; Allahverdyan, A.; Cheong, K.H. Energy and entropy: Path from game theory to statistical mechanics. Phys. Rev. Res. 2020, 2, 043055. [Google Scholar] [CrossRef]
  53. Cheong, K.H.; Koh, J.M.; Jones, M.C. Paradoxical survival: Examining the parrondo effect across biology. BioEssays 2019, 41, 1900027. [Google Scholar] [CrossRef] [PubMed]
  54. Pawade, D.Y. Analyzing the Impact of Search Engine Optimization Techniques on Web Development Using Experiential and Collaborative Learning Techniques. Int. J. Mod. Educ. Comput. Sci. 2021, 2, 1–10. [Google Scholar] [CrossRef]
  55. Wang, H.; Fang, Y.P.; Zio, E. Resilience-oriented optimal post-disruption reconfiguration for coupled traffic-power systems. Reliab. Eng. Syst. Saf. 2022, 222, 108408. [Google Scholar] [CrossRef]
  56. Khalaj, F.; Khalaj, M. Developed cosine similarity measure on belief function theory: An application in medical diagnosis. In Communications in Statistics-Theory and Methods; Taylor & Francis: Abingdon, UK, 2020; pp. 1–12. [Google Scholar]
  57. Xiao, F. CEQD: A complex mass function to predict interference effects. IEEE Trans. Cybern. 2021. [Google Scholar] [CrossRef]
  58. Yan, Z.; Zhao, H.; Mei, X. An improved conflicting-evidence combination method based on the redistribution of the basic probability assignment. In Applied Intelligence; Springer: Berlin/Heidelberg, Germany, 2021; pp. 1–27. [Google Scholar]
  59. Cheng, C.; Xiao, F. A distance for belief functions of orderable set. Pattern Recognit. Lett. 2021, 145, 165–170. [Google Scholar] [CrossRef]
  60. Cui, H.; Zhou, L.; Li, Y.; Kang, B. Belief entropy-of-entropy and its application in the cardiac interbeat interval time series analysis. Chaos Solitons Fractals 2022, 155, 111736. [Google Scholar] [CrossRef]
  61. Khalaj, M.; Tavakkoli-Moghaddam, R.; Khalaj, F.; Siadat, A. New definition of the cross entropy based on the Dempster-Shafer theory and its application in a decision-making process. Commun. Stat.-Theory Methods 2020, 49, 909–923. [Google Scholar] [CrossRef]
  62. Gao, X.; Pan, L.; Deng, Y. A generalized divergence of information volume and its applications. Eng. Appl. Artif. Intell. 2022, 108, 104584. [Google Scholar] [CrossRef]
  63. Balakrishnan, N.; Buono, F.; Longobardi, M. A unified formulation of entropy and its application. Phys. A Stat. Mech. Appl. 2022, 569, 127214. [Google Scholar] [CrossRef]
  64. Song, Y.; Deng, Y. Entropic explanation of power set. Int. J. Comput. Commun. Control 2021, 16. [Google Scholar] [CrossRef]
  65. Qiang, C.; Deng, Y.; Cheong, K.H. Information fractal dimension of mass function. Fractals 2022. [Google Scholar] [CrossRef]
  66. Kazemi, M.R.; Tahmasebi, S.; Buono, F.; Longobardi, M. Fractional deng entropy and extropy and some applications. Entropy 2021, 23, 623. [Google Scholar] [CrossRef] [PubMed]
  67. Song, H.; Wu, D.; Li, M.; Cai, C.; Li, J. An entropy based approach for software risk assessment: A perspective of trustworthiness enhancement. In Proceedings of the 2nd International Conference on Software Engineering and Data Mining, Chengdu, China, 23–25 June 2010; pp. 575–578. [Google Scholar]
  68. Lee, H.M. Group decision making using fuzzy sets theory for evaluating the rate of aggregative risk in software development. Fuzzy Sets Syst. 1996, 80, 261–271. [Google Scholar] [CrossRef]
  69. Sangaiah, A.K.; Samuel, O.W.; Li, X.; Abdel-Basset, M.; Wang, H. Towards an efficient risk assessment in software projects–Fuzzy reinforcement paradigm. Comput. Electr. Eng. 2018, 71, 833–846. [Google Scholar] [CrossRef]
  70. Suresh, K.; Dillibabu, R. A novel fuzzy mechanism for risk assessment in software projects. Soft Comput. 2020, 24, 1683–1705. [Google Scholar] [CrossRef]
  71. Hsieh, M.Y.; Hsu, Y.C.; Lin, C.T. Risk assessment in new software development projects at the front end: A fuzzy logic approach. J. Ambient Intell. Humaniz. Comput. 2018, 9, 295–305. [Google Scholar] [CrossRef]
  72. Kumar, C.; Yadav, D.K. A probabilistic software risk assessment and estimation model for software projects. Procedia Comput. Sci. 2015, 54, 353–361. [Google Scholar] [CrossRef] [Green Version]
  73. Iranmanesh, S.H.; Khodadadi, S.B.; Taheri, S. Risk assessment of software projects using fuzzy inference system. In Proceedings of the 2009 International Conference on Computers & Industrial Engineering, Troyes, France, 6–9 July 2009; pp. 1149–1154. [Google Scholar]
  74. Boehm, B.W. Software risk management: Principles and practices. IEEE Softw. 1991, 8, 32–41. [Google Scholar] [CrossRef]
  75. Triantafyllou, I.S. Reliability study of military operations: Methods and applications. In Military Logistics; Springer: Berlin/Heidelberg, Germany, 2015; pp. 159–170. [Google Scholar]
  76. Koutras, M.V.; Triantafyllou, I.S.; Eryilmaz, S. Stochastic comparisons between lifetimes of reliability systems with exchangeable components. Methodol. Comput. Appl. Probab. 2016, 18, 1081–1095. [Google Scholar] [CrossRef]
  77. Lee, H.M. Applying fuzzy set theory to evaluate the rate of aggregative risk in software development. Fuzzy Sets Syst. 1996, 79, 323–336. [Google Scholar] [CrossRef]
Figure 1. Software risk breakdown structure.
Figure 2. Risk matrix and its level.
Figure 3. The flowchart of DDERM.
Figure 4. Risk value.
Table 1. Risk factors of software projects.

Risk Factors in the Literature | References
Requirement risk, user risk, developer risk, project management risk, development risk, environment risk | [67]
Personnel risk, system requirement risk, schedules and budgets risk, developing technology risk, external resource risk, performance risk | [68]
Requirements risk, estimations risk, planning risk, team organization risk, project management risk | [69]
Schedule risk, product risk, platform risk, personnel risk, process risk, reuse risk | [70]
Organizational environment risk, user risk, requirement risk, project complexity risk, team risk, planning risk | [71]
Requirement specification, design and implementation, integration and testing, development process and system management process, management methods, work environment, resources, contract and program interface | [72]
Corporate environment, sponsorship and ownership, relationship management, project management, scope, requirements, funding, scheduling and planning, development process, personnel and staffing, technology, external dependencies | [73]
Table 2. Risk list.

Attribute | Risk Item | Code
Requirement (C1) | Ambiguous requirements | R1
 | Misunderstanding the requirements | R2
 | Frequent requirement changes | R3
 | Lack of effective requirement change management | R4
Scheduling & Planning (C2) | The plan is too ideal to be realized | S1
 | Too many interruptions | S2
 | Unspecified project milestones | S3
Organize & Manage (C3) | Lack of resource management | O1
 | Inadequate project monitoring and controlling | O2
Personnel (C4) | Lack of skills and experience | P1
 | Leave or sick | P2
Table 3. Levels of risk probability.

Level | Linguistic Terms
1 | Very unlikely
2 | Unlikely
3 | Even
4 | Possible
5 | Very possible
Table 4. Levels of risk severity.

Level | Linguistic Terms
A | Very little
B | Little
C | Medium
D | Serious
E | Catastrophic
Table 5. Levels of risk.

Level | Linguistic Terms
I | Low
II | Medium
III | Significant
IV | High
Table 6. Risk matrix.

P × S | Very Little | Little | Medium | Serious | Catastrophic
Very possible | Medium | Significant | Significant | High | High
Possible | Medium | Medium | Significant | Significant | High
Even | Low | Medium | Medium | Significant | High
Unlikely | Low | Medium | Medium | Significant | Significant
Very unlikely | Low | Low | Medium | Medium | Significant
Table 7. Weight of risk.

Probability level | A | B | C | D | E
5 | 5 | 10 | 15 | 20 | 25
4 | 4 | 8 | 12 | 16 | 20
3 | 3 | 6 | 9 | 12 | 15
2 | 2 | 4 | 6 | 8 | 10
1 | 1 | 2 | 3 | 4 | 5
Table 8. Experts assignment.

Risks | Experts | P (levels 1–5) | S (levels A–E)
R1E1 80% 60%
E2 30%60% 30%60%
E3 20%60% 50%50%
R2E1 70%20% 45%55%
E2 90% 40%50%
E3 50%50% 80%
R3E1 90% 90%
E2 85% 85%
E3 50%40% 50%40%
R4E190% 80%
E250%50% 40%40%
E360%40% 60%40%
S1E190% 75%
E230%50% 60%40%
E3 80% 80%
S2E1 85% 80%
E2 90% 80%
E3 40%50% 40%50%
S3E1 50%50% 85%
E2 60%30% 75%
E3 60% 80%
O1E1 85% 60%40%
E2 60% 80%20%
E3 80% 90%
O2E1 90% 85%
E2 85% 70%
E3 50%35% 50%40%
P1E140%60% 25%75%
E2 80% 75%
E385% 80%
P2E1 60%40% 50%
E2 85% 60%
E3 50%35% 80%20%
Table 9. Fuse results.

Risks | P=1 | P=2 | P=3 | P=4 | P=5 | S=A | S=B | S=C | S=D | S=E
R1 | - | - | - | 0.4691 | 0.5186 | - | - | - | 0.7342 | 0.2389
R2 | - | - | 0.9427 | 0.0560 | - | - | - | - | 0.7854 | 0.2114
R3 | - | - | 0.4662 | 0.5285 | - | - | - | 0.6298 | 0.3631 | -
R4 | 0.8998 | 0.1001 | - | - | - | - | 0.814 | 0.179 | - | -
S1 | 0.2607 | 0.7258 | - | - | - | 0.5902 | 0.3955 | - | - | -
S2 | - | 0.5285 | 0.4462 | - | - | - | - | 0.4487 | 0.5399 | -
S3 | - | - | - | 0.2389 | 0.7342 | - | 0.2623 | 0.7186 | - | -
O1 | - | 0.7623 | 0.1965 | - | - | - | 0.3715 | 0.6283 | - | -
O2 | - | - | 0.4853 | 0.5082 | - | - | - | 0.4176 | 0.5638 | -
P1 | 0.3630 | 0.6304 | - | - | - | - | 0.4256 | 0.5587 | - | -
P2 | - | - | 0.3174 | 0.6781 | - | - | 0.5121 | 0.3805 | - | -
Table 10. The result of DDERM.

Risks | Level (P) | Level (S) | Level (R) | Value (P) | Value (S) | Value (R) | Overall Value
R1 | 4 | D | III | 0.4691 | 0.7342 | 16 | 5.5106
 | 4 | E | IV | 0.4691 | 0.2389 | 20 | 2.2413
 | 5 | D | IV | 0.5186 | 0.7342 | 20 | 7.6151
 | 5 | E | IV | 0.5186 | 0.2389 | 25 | 3.0973
R2 | 3 | D | III | 0.9427 | 0.7854 | 12 | 8.8848
 | 3 | E | IV | 0.9427 | 0.2114 | 15 | 2.9893
 | 4 | D | III | 0.0560 | 0.7854 | 16 | 0.7037
 | 4 | E | IV | 0.0560 | 0.2114 | 20 | 0.2368
R3 | 3 | C | II | 0.4662 | 0.6298 | 9 | 2.6425
 | 3 | D | III | 0.4662 | 0.3631 | 12 | 2.0313
 | 4 | C | III | 0.5285 | 0.6298 | 12 | 3.9941
 | 4 | D | III | 0.5285 | 0.3631 | 16 | 3.0704
R4 | 1 | B | I | 0.8998 | 0.8140 | 2 | 1.4649
 | 1 | C | II | 0.8998 | 0.1790 | 3 | 0.4831
 | 2 | B | II | 0.1001 | 0.8140 | 4 | 0.3259
 | 2 | C | II | 0.1001 | 0.1790 | 6 | 0.1075
S1 | 1 | A | I | 0.2607 | 0.5902 | 1 | 0.1539
 | 1 | B | I | 0.2607 | 0.3955 | 2 | 0.2062
 | 2 | A | I | 0.7258 | 0.5902 | 2 | 0.8567
 | 2 | B | II | 0.7258 | 0.3955 | 4 | 1.1482
S2 | 2 | C | II | 0.5285 | 0.4487 | 6 | 1.4228
 | 2 | D | III | 0.5285 | 0.5399 | 8 | 2.2826
 | 3 | C | II | 0.4676 | 0.4487 | 9 | 1.8827
 | 3 | D | III | 0.4676 | 0.5399 | 12 | 3.0204
S3 | 4 | B | II | 0.2389 | 0.2623 | 8 | 0.5013
 | 4 | C | III | 0.2389 | 0.7186 | 12 | 2.0600
 | 5 | B | III | 0.7342 | 0.2623 | 10 | 1.9258
 | 5 | C | III | 0.7342 | 0.7186 | 15 | 7.9139
O1 | 2 | B | II | 0.7623 | 0.3715 | 4 | 1.1328
 | 2 | C | II | 0.7623 | 0.6383 | 6 | 2.8737
 | 3 | B | II | 0.1965 | 0.3715 | 6 | 0.4380
 | 3 | C | II | 0.1965 | 0.6383 | 9 | 1.1111
O2 | 3 | C | II | 0.4853 | 0.4176 | 9 | 1.8240
 | 3 | D | III | 0.4853 | 0.5638 | 12 | 3.2833
 | 4 | C | III | 0.5082 | 0.4176 | 12 | 2.5467
 | 4 | D | III | 0.5082 | 0.5638 | 16 | 4.5844
P1 | 1 | B | I | 0.3630 | 0.4256 | 2 | 0.3090
 | 1 | C | II | 0.3630 | 0.5587 | 3 | 0.6084
 | 2 | B | II | 0.6304 | 0.4256 | 4 | 1.0732
 | 2 | C | II | 0.6304 | 0.5587 | 6 | 2.1132
P2 | 3 | B | II | 0.3174 | 0.5121 | 6 | 0.9752
 | 3 | C | II | 0.3174 | 0.3805 | 9 | 1.0870
 | 4 | B | II | 0.6781 | 0.5121 | 8 | 2.7780
 | 4 | C | III | 0.6781 | 0.3805 | 12 | 3.0962
Table 11. Final ranking for each risk level.

Risk | I | II | III | IV
R1 | 0 | 0 | 5.5106 [3] | 7.6151 [1]
R2 | 0 | 0 | 8.8848 [1] | 2.9893 [2]
R3 | 0 | 2.6425 [3] | 3.9942 [5] | 0
R4 | 1.4649 [1] | 0.4832 [9] | 0 | 0
S1 | 0.8567 [2] | 1.1482 [7] | 0 | 0
S2 | 0 | 2.2827 [4] | 3.0204 [7] | 0
S3 | 0 | 0.5013 [8] | 7.9139 [2] | 0
O1 | 0 | 2.8737 [1] | 0 | 0
O2 | 0 | 1.8240 [6] | 4.5844 [4] | 0
P1 | 0.3090 [3] | 2.1132 [5] | 0 | 0
P2 | 0 | 2.7780 [2] | 3.0962 [6] | 0
