Article

A Blockchain-Based Incentive Mechanism for Sharing Cyber Threat Intelligence

1 School of Information Network Security, People's Public Security University of China, Beijing 100038, China
2 MPS Information Classified Security Protection Evaluation Center, Beijing 100142, China
* Authors to whom correspondence should be addressed.
Electronics 2023, 12(11), 2454; https://doi.org/10.3390/electronics12112454
Submission received: 6 May 2023 / Revised: 20 May 2023 / Accepted: 27 May 2023 / Published: 29 May 2023

Abstract:
With the development of the Internet, cyberattacks are becoming increasingly complex, sustained, and organized. Cyber threat intelligence sharing is an effective way to relieve the pressure on organizational and individual cybersecurity defense. However, current cyber threat intelligence sharing lacks effective incentive mechanisms, resulting in mutual distrust and low motivation to share among members and calling the sustainability of sharing into question. In this paper, we propose a blockchain-based cyber threat intelligence sharing mechanism (B-CTISM) to address the problems of free riding and lack of trust among sharing members. We use evolutionary game theory to analyze the incentive strategy; the resulting evolutionarily stable strategy promotes sharing and effectively curbs free-riding behavior. The incentive strategy is then deployed in smart contracts running on the blockchain, whose decentralized and tamper-evident properties provide a trusted environment for participating members and establish trust without a third-party central institution, achieving secure and efficient cyber threat intelligence sharing. Finally, the effectiveness of B-CTISM in facilitating and regulating threat intelligence sharing is verified through experimental simulation and comparative analysis.

1. Introduction

With the increasingly complex security posture of cyberspace, traditional passive cybersecurity defense is stretched to its limits. Given their limited investment of manpower and resources in cybersecurity, most SMEs (small and medium-sized enterprises) struggle to defend against a wide range of cyberattacks such as ransomware, APTs (advanced persistent threats), and DDoS (distributed denial of service) attacks [1]. For organizations with limited cybersecurity expertise and tight budgets, cyber threat intelligence (CTI) sharing is a practical way to help defend against cyber threats [2]. CTI sharing refers to the collection, analysis, and sharing of data among multiple organizations for the purpose of early warning and advance action against cyberattacks that have occurred or are likely to occur. An industry survey conducted by SANS found that approximately 81% of security professionals believe that the use of cyber threat intelligence has effectively enhanced their organization's security [3].
Although cyber threat intelligence sharing can help mitigate the cybersecurity pressures on organizations, in practice it faces a number of problems: (1) The lack of motivation and incentives for sharing leads to low participation rates. CTI sharing platforms face a major issue in distributing benefits fairly to organizations or individuals. This is particularly true in a decentralized sharing environment, where the absence of reasonable and effective incentive mechanisms leads to low motivation for sharing [4]. (2) Free-riding behavior, where certain members enjoy the information provided by others but hardly contribute any themselves. This behavior negatively impacts the motivation of other members to share information and, when the number of free riders grows beyond a certain level, the information sharing platform may lose balance and collapse [5]. (3) Lack of trust among sharing members. To address trust issues, traditional CTI sharing platforms often require trusted third-party participation. However, this increases the sharing cost and reduces sharing efficiency [6].
In this paper, we propose an incentive mechanism called B-CTISM (blockchain-based cyber threat intelligence sharing mechanism), which can promote cyber threat intelligence sharing, encourage organizations and individuals to actively participate in sharing, and improve the motivation and efficiency of sharing. The main contributions are as follows:
(1)
Employing evolutionary game theory to analyze and derive corresponding incentive strategies that can curb the free-riding behavior of participating members in threat intelligence sharing, so that the free-riding members eventually choose to participate in the sharing or withdraw from the platform under the effect of the incentive mechanism.
(2)
The incentive mechanism is deployed into smart contracts that run in the trusted environment of blockchain to dynamically regulate the incentive strategy. The decentralization and tamper-proof characteristics of blockchain solve the lack of trust among the sharing members and no longer require the participation of trusted third parties, thereby realizing the secure and efficient sharing of cyber threat intelligence.
The structure of this article is arranged as follows: Section 2 reviews related work on cyber threat intelligence sharing and blockchain-based incentive mechanisms; Section 3 constructs the B-CTISM threat intelligence sharing incentive mechanism and elaborates its specific content; Section 4 designs the blockchain smart contract algorithms that implement the incentive mechanism; Section 5 presents simulation experiments that analyze and verify the effectiveness of the proposed scheme; Section 6 concludes the article.

2. Related Work

2.1. Cyber Threat Intelligence Sharing

Cyber threat intelligence is a collection of information on threat behavior, vulnerabilities, malicious software, attackers, attack methods, tools, and other aspects that may pose potential or direct harm to organizations or individuals. This information can help organizations or individuals assess current network security threats and trends and make decisions accordingly. In short, any information that may cause harm or loss of benefits to organizations or individuals can be referred to as threat intelligence; this is currently the de facto definition across the industry [7].
Cyber threat intelligence can be derived through various means, including detailed malware information, indicators of compromise (IoC), and the specific tactics used by attackers to steal information. This information enables organizations to update their defense mechanisms against potential cyber threats. Possible sources of cyber threat intelligence may include historical events, open source intelligence (OSINT), network defense systems, threat information providers, and network information sharing exchanges. Sharing cyber threat intelligence has emerged as a proactive defense mechanism against the rising number of cyberattacks [8,9,10]. Cyber threat intelligence sharing can break information silos, quickly perceive cyberspace security posture, and proactively defend against threats. By sharing threat intelligence, organizations can keep abreast of the latest threat situation, identify and prevent threats, and thus improve their own security defense capabilities. Moreover, sharing threat intelligence among different organizations can also avoid duplication of efforts, reduce resource waste, strengthen the security of the entire cybersecurity ecosystem, and improve the overall security situation [6].
In the area of cyber threat intelligence sharing, the MITRE Corporation in the United States developed the adversarial tactics, techniques, and common knowledge (ATT&CK) framework [11] and defined matrices to help stakeholders understand the tactics, techniques, and procedures (TTP) deployed by attackers. To address the need for intelligence sharing, the industry has developed a series of standards for the exchange of threat intelligence. Structured threat information expression (STIX) is an intelligence expression specification developed and maintained by OASIS-CTI-TC for modeling, analyzing, and exchanging cyber threat intelligence and has been updated to STIX version 2.0 [12]. Borges et al. [13] proposed a methodological framework for collecting, organizing, filtering, sharing, and visualizing cyber threat data to mitigate attacks and remediate vulnerabilities, based on an eight-step cyber threat intelligence model with a timeline visualization of threat information and analytical data insight. Purohit et al. [14] proposed a threat intelligence sharing and defense system called DefenseChain that allows organizations to engage in incentive-based and trust-based cooperation to mitigate the impact of cyberattacks.

2.2. Blockchain Technology

Blockchain is a distributed digital ledger for storing electronic data that is decentralized and tamper-proof [15,16]. Blockchain-based incentives can address the trust and motivation problems described in the Introduction: blockchain can establish reliable trust relationships between multiple parties who do not know each other, and blockchain-based cryptocurrencies have gained wide attention and recognition in the real world, verifying their reliability. Ethereum is a distributed public blockchain network that allows developers to create and distribute decentralized applications [17]. It was the first blockchain platform specifically designed to implement smart contracts; it provides powerful and flexible smart contract functionality and remains one of the most popular smart contract platforms today. Truffle is an open-source development framework for Ethereum smart contracts. It provides a set of tools and resources that make smart contract development and testing easier and more efficient. The Truffle Suite includes Ganache [18], a desktop client that simulates a complete Ethereum network environment, enabling developers to develop, test, and debug their smart contracts locally. Truffle also provides built-in smart contract compilation, deployment tools, contract libraries, and other features that streamline the entire development process. The smart contract deployment tests in this paper are conducted using Ganache.
Since the data on the blockchain cannot be tampered with, the cryptocurrency or virtual points obtained by organizations or individuals participating in the sharing will be recorded intact, faithfully reflecting the contribution and consumption of each participating member. A smart contract is a computer program stored on the blockchain that is automatically executed when pre-defined trigger conditions are met [19,20]. Unlike real contracts, the execution of a smart contract does not require the participation or supervision of a trusted third party; the content of the contract cannot be changed and can be executed automatically when the trigger conditions are met. Therefore, it can establish a trust relationship between parties that lack trust.

2.3. Shared Incentive Mechanism

Since cyber threat intelligence is valuable information, contributors may lack the motivation to share their intelligence without appropriate compensation. The lack of effective sharing mechanisms hinders the progress of threat intelligence sharing and has garnered increasing scholarly attention. Existing incentive schemes often rely on trusted third parties, such as banks, to act as guarantors or incentive rule makers and administrators, but trusted third parties suffer from systemic trust deficit and privacy leakage due to their opaque control, vulnerability to attacks, and high costs [21].
The blockchain-based incentive mechanism uses cryptocurrency or virtual points as the incentive method, which is decentralized, secure, and robust. Cryptocurrency or virtual points are put into deployment on the blockchain as rewards for sharing and the sharing strategy is executed in the form of smart contracts. Once the program is on the blockchain, it will follow the code execution and both organizers and participants cannot tamper with or violate the agreement. Organizers and participants can cooperate with each other without the trust and supervision of third parties.
During the research of the blockchain-based incentive mechanism, Ai et al. [22] proposed a blockchain-based consensus incentive mechanism, termed the “ABC mechanism”, from the perspective of mechanism design by applying continuous double auction theory for research on the incentive mechanism of blockchain. There are four phases in the ABC mechanism, namely initiation, auction, completion, and confirmation phases. The continuous double auction model is employed to facilitate the real-time storage of transactions and tackles the problems of fairness and impartiality in existing consensus mechanisms. Cheng et al. [23] presented a reverse auction-based incentive mechanism that stimulates the participation of IoT managers by combining virtual-reputation incentives with a reverse auction mechanism. The proposed mechanism is implemented in a smart contract on the blockchain to provide a privacy-preserving and efficient solution for task execution. Ding et al. [24] designed an incentive mechanism for IoT blockchain networks to motivate IoT devices to purchase more computing resources from edge servers so that a secure blockchain network can be established while ensuring the revenue of the blockchain platform. The article models the interaction between the blockchain platform and IoT devices as a two-stage Stackelberg game, proves the existence and uniqueness of the Stackelberg equilibrium point, and designs an efficient algorithm to calculate the Stackelberg equilibrium point.
Zhang et al. [25] developed an evolutionary game-based model for data sharing incentives that includes a smart contract data sharing incentive engine on a pre-existing blockchain platform. The model addresses issues such as low user participation motivation and the lack of corresponding data sharing incentives faced by data sharing communities. Motepalli et al. [26] developed a reward mechanism framework applicable to multiple PoS consensus protocol blockchains to solve the problems of free riding and the nothing-at-stake attack, using evolutionary game theory to analyze how participants' behavior changes with the reward mechanism and to demonstrate the importance of punishment in maintaining the integrity of the ledger. Liang et al. [21] suggested a blockchain- and auction-based incentive mechanism that maximizes crowdsourcing utility and social welfare while reducing social costs; system entities are incentivized to optimize their expenses, maximizing overall social welfare.
Although many scholars have studied blockchain-based incentive mechanisms for different scenarios and proposed various incentive schemes, the existing mechanisms lack theoretical analysis for the cyber threat intelligence sharing setting. Incentive policies are formulated and executed unilaterally by the sharing platform and their credible execution relies on the platform's reputation. Due to the lack of theoretical analysis and dynamic regulation, the sharing policy cannot always maintain the most appropriate state and does not effectively inhibit the free-riding problem. Therefore, in this paper we propose a blockchain-based threat intelligence sharing incentive mechanism, named B-CTISM, which utilizes evolutionary game theory as an analytical tool to construct the sharing mechanism and analyze the stable states of strategy evolution. In addition, the mechanism adopts smart contracts to execute the incentive policies; deployment on the blockchain requires the agreement of core members and, once deployed, the smart contract strictly enforces the incentive policies.

3. The Sharing Incentive Mechanism Construction

Traditionally, game theory has been seen as a theory of how rational individuals act and has long assumed a high level of rationality. Evolutionary game theory, derived from basic concepts of Darwinian evolution, was developed to address the lack of temporal dynamics in traditional game theory, which focuses primarily on equilibrium [27]. Unlike traditional game theory, which assumes perfectly rational participants, evolutionary game theory allows for finitely rational participants who identify the best evolutionary strategy through continuous trial and error [28]. In evolutionary game theory, good strategies propagate through the population of participants via the Darwinian principle of selection rather than being learned exclusively by rational individuals.
Participants in a threat intelligence sharing platform may not be fully rational and the strategies they employ can change over time as the game evolves. Sharing members attempt to maximize their interests in diverse sharing environments by learning, imitating, and adopting other strategies. If a member shares threat intelligence but experiences no benefit (or if other members gain from the shared intelligence without reciprocating), they are likely to stop sharing. However, if members receive benefits such as rewards, network protection, or increased institutional reputation from sharing threat intelligence, they will be more inclined to continue sharing. Accordingly, we adopt evolutionary game theory rather than conventional game theory to model and analyze the incentive mechanism in cyber threat intelligence sharing.

3.1. Sharing Strategy Assumptions

Cyber threat intelligence is valuable and essential data; for some organizations, such as cybersecurity firms, it is considered a core competency, and even individuals may decline to share out of self-interest or privacy concerns. Incentives are therefore needed to motivate the sharing of cyber threat intelligence. However, besides rewarding members for participating, the incentive design must also account for the high time-sensitivity of cyber threat intelligence, whose value decays rapidly, and hence for "zombie members" who do not participate in sharing for a long time. If such passive members face no penalties, they can use rewards (e.g., points) earned long ago to keep receiving the latest threat intelligence from others and benefit from it. Other participants will imitate this behavior and the sharing platform will eventually collapse. Penalizing members who do not share for long periods erodes their accumulated rewards over time, so most members who want to benefit from sharing must keep participating. Security defense benefits can only be gained when cyber threat intelligence is shared; if no new threat intelligence is uploaded to the platform, the effectiveness of security defense cannot be ensured. Therefore, although incentives are necessary, they are not sufficient on their own; penalties are also needed and passive sharing behavior must be punished to promote active sharing among participants.
The threat intelligence sharing game model has been constructed under the following assumptions in order to simplify the analysis:
  • The threat intelligence sharing platform has a defined number of members and each member is a finitely rational participant. Specifically, the set of sharing members is $I = \{1, 2, 3, \ldots, n\}$ $(n \in \mathbb{N}^+)$; the strategy set of each member $i \in I$ is $s_i = \{s^1, s^2\} = \{\mathrm{shared}, \mathrm{unshared}\}$. The strategy space $S = \{(s_1^1, s_2^1), (s_1^1, s_2^2), (s_1^2, s_2^1), (s_1^2, s_2^2)\}$, the Cartesian product of the pure strategy sets of the game parties, consists of all possible combinations of pure strategies;
  • In the initial stage of the game, participating member $i \in I$ chooses to share threat intelligence with probability $x$ and not to share with probability $1 - x$, i.e., $P(s^1) = x$, $P(s^2) = 1 - x$, $x \in (0, 1)$;
  • Members who participate in sharing cyber threat intelligence obtain the shared benefit $U$; $\omega$ is the scaling factor of the defense benefit. The shared benefit consists of virtual points and reputation-level enhancement granted by the sharing platform. Because the platform is based on blockchain smart contracts, the virtual points are tamper-proof and execution is credible, which addresses members' trust in the platform. Additionally, when other members also share, complementary threat intelligence (breaking the information island and enabling proactive defense) yields an additional defense benefit $\omega U$;
  • Cyber threat intelligence is highly time-sensitive; member accounts that are inactive for a long time (e.g., longer than a set period of one month) are penalized in points, with the penalty denoted by $\mu$;
  • To mark the contributions and historical reputation of sharing members, we introduce an account level $r$ to distinguish members; members with higher levels receive higher sharing profits. $c$ and $\theta$ denote the cost coefficient and the cost-balancing parameter, respectively. To prevent accounts with $r = 1$ from pouring spam into the platform at no cost, the cost coefficient is computed as $c = \theta / r$ $(\theta \in \mathbb{N})$. When $\theta = r$, the publishing cost balances the publishing incentive; this is the cost-neutral account level. For example, when $\theta = 4$, the cost coefficient $c$ is greater than 1 until the account level $r$ reaches level 4, meaning that until then the cost of publishing threat intelligence exceeds the profit;
  • The initial points of each member are $\rho$, with $\rho > \mu$. When $\theta > 1$ arises during dynamic regulation, a new member with $r = 1$ must incur a cost to release threat intelligence; the initial points $\rho$ help the new member release threat intelligence and earn an account-level promotion. Of course, if a new member is inactive for a long time after joining, they are penalized $\mu$ points, which also encourages new members to start sharing as soon as possible after joining;
  • To actively regulate the platform's sharing participation, a dynamic parameter $\gamma$ is introduced, whose positive and negative values represent a cost and a benefit, respectively. When the sharing benefit $U$ is not enough to stimulate participation and members generally share passively, extra points can encourage sharing; $\gamma$ is then set below zero. When the platform's sharing degree is too high and the quality of its threat intelligence declines (causing point inflation), which dampens the willingness of high-level members to share high-value information, a cost must be charged for sharing; $\gamma$ is then set above zero;
  • $k$ is the stock-benefit scaling factor when both game parties choose not to share. Although members share passively, the platform's stock of cyber threat intelligence still brings some gain to participating members, denoted by $kU$, $k \in (0, 1)$.
In practice, the benefit does not grow linearly with input cost, so in this paper we use the natural logarithm to describe the relationship between the input cost $C$ and the benefit; that is, the shared benefit is $U = \ln(1 + C)$, $C > 0$.
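As a small illustration, the shared benefit and cost coefficient from the assumptions above can be written directly (a minimal sketch; the function names are ours, not from the paper's implementation):

```python
import math

def shared_benefit(C):
    """Shared benefit U = ln(1 + C) for input cost C > 0 (diminishing returns)."""
    if C <= 0:
        raise ValueError("input cost C must be positive")
    return math.log(1 + C)

def cost_coefficient(theta, r):
    """Cost coefficient c = theta / r for account level r >= 1.

    c > 1 (publishing cost exceeds profit) while r < theta; c = 1 at r = theta."""
    return theta / r
```

For instance, with $\theta = 4$ a level-2 account has $c = 2$, so publishing costs twice what it earns, while a level-4 account breaks even.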
According to the above assumptions, the cyber threat intelligence sharing game model can be summarized as a triple $G = (I, S, \pi)$, where $I$ is the set of platform sharing members, $S$ is the pure strategy space, and $\pi$ is the payoff function. A mixed strategy is a probability distribution over the pure strategy set $S_i$ of game party $i$. When the mixed strategy combination $\lambda$ is adopted, the probability that a pure strategy profile $s = (s_1, \ldots, s_n) \in S$ is played is the product of the probabilities assigned to each $s_i$ by the corresponding party's mixed strategy:

$$\lambda(s) = \prod_{i=1}^{n} \lambda_i(s_i) \tag{1}$$

The expected benefit of the mixed strategy combination $\lambda$ for game party $i$ is

$$u_i(\lambda) = \sum_{s \in S} \lambda(s)\,\pi_i(s) \tag{2}$$
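The expected benefit $u_i(\lambda)$ can be computed by enumerating all pure strategy profiles; the sketch below is our own illustrative helper, not the paper's code:

```python
from itertools import product

def mixed_strategy_payoff(lams, payoff_i):
    """Expected payoff u_i(lambda) = sum over profiles s of lambda(s) * pi_i(s).

    lams:     one dict per player mapping pure strategy -> probability
    payoff_i: dict mapping a full strategy profile (tuple) -> player i's payoff
    """
    total = 0.0
    for profile in product(*(lam.keys() for lam in lams)):
        prob = 1.0
        for lam, s in zip(lams, profile):
            prob *= lam[s]            # lambda(s) = product over i of lambda_i(s_i)
        total += prob * payoff_i[profile]
    return total
```

For two players who each share with probability 0.5 and payoffs 2, 1, 0, 0 for the profiles (share, share), (share, not), (not, share), (not, not), the expected payoff is 0.75.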
Based on the above assumptions, the mixed strategy game matrix for both sides of the game is shown in Table 1:
Scenario 1: Both parties in the game choose to participate in cyber threat intelligence sharing, which is the desired goal state of the incentive mechanism design. Under this circumstance, participating members obtain the shared benefit $U = \ln(1 + C)$, $C > 0$, by sharing threat intelligence; the benefit varies with the node account level, i.e., $r\ln(1 + C)$. To prevent attackers from injecting junk threat intelligence by registering a large number of low-level accounts, the B-CTISM mechanism introduces the cost coefficient $c = \theta / r$ ($\theta \geq 0$, $r \geq 1$), where $\theta$ is the cost-balancing parameter; the net sharing profit of a participating node is $r\ln(1 + C) - c\ln(1 + C)$, i.e., $(r - \theta/r)\ln(1 + C)$. Moreover, both parties gain a security defense benefit from the complementary threat intelligence, scaled relative to the shared benefit by $\omega$. To proactively regulate sharing participation when the basic sharing benefit cannot promote sharing, or when sharing is overheated and flooded with low-value information, the dynamic parameter $\gamma$ is introduced, whose positive or negative value represents the current regulation strategy as a cost or an incentive. The final payoff of each participating node is $(r - \theta/r)\ln(1 + C) + \omega\ln(1 + C) - \gamma$.
Scenario 2: One side of the game shares and the other does not, i.e., some members free ride, wanting to access the threat intelligence shared by others without contributing their own data. To curb free riding, and because cyber threat intelligence is highly time-sensitive, nodes that do not share for a long time are penalized $\mu$ points. When node 1 does not share and node 2 shares, node 1 is penalized but, since node 2 still shares, node 1 still gains the security defense benefit from free riding; node 1's payoff is $\omega\ln(1 + C) - \mu$. Node 2, however, obtains only the sharing benefit and no security defense benefit, i.e., $(r - \theta/r)\ln(1 + C) - \gamma$.
Scenario 3: Both sides choose not to share, the situation the incentive design must avoid. When neither side shares, there is no sharing benefit, but the stock of cyber threat intelligence on the platform still gives each party a certain security defense benefit, scaled by $k$ relative to the defense benefit when nodes are sharing, i.e., $k\ln(1 + C)$, $k \in (0, 1)$.
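The three scenarios can be collected into the 2×2 payoff table of the row player. A minimal sketch (the parameter names follow the assumptions above; the function itself is our own illustration):

```python
import math

def payoff_table(C, r, theta, omega, gamma, mu, k):
    """Row player's payoffs in the sharing game (Scenarios 1-3)."""
    U = math.log(1 + C)                 # shared benefit U = ln(1 + C)
    net_share = (r - theta / r) * U     # sharing benefit net of publishing cost
    return {
        ('share', 'share'): net_share + omega * U - gamma,  # Scenario 1
        ('share', 'not'):   net_share - gamma,              # Scenario 2, sharer
        ('not', 'share'):   omega * U - mu,                 # Scenario 2, free rider
        ('not', 'not'):     k * U,                          # Scenario 3
    }
```

For example, with $C = e - 1$ (so $U = 1$), $r = 2$, $\theta = 2$, $\omega = 0.5$, $\gamma = 0.1$, $\mu = 0.3$, $k = 0.4$, the four payoffs are 1.4, 0.9, 0.2, and 0.4.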

3.2. Sharing Strategy Analysis

In evolutionary game theory, if the growth rate of the proportion of individuals using a certain strategy in a group corresponds to the fitness of those individuals, then, as long as the fitness of individuals using a strategy is higher than the average fitness of the group, the number of individuals using that strategy will increase and the strategy will not be invaded by others [29].
Threat intelligence providers or individual users with bounded rationality independently participate in repeated games on a secure and trustworthy threat intelligence sharing platform supported by blockchain technology. As per the preceding hypothesis, at the initial stage of the game, each participating member chooses to share with probability $x$ and not to share with probability $1 - x$. Given that node 2 shares with probability $x$ and does not share with probability $1 - x$, the expected payoff for node 1 choosing to share is:

$$U_1 = x\left[(r - \theta/r)\ln(1 + C) + \omega\ln(1 + C) - \gamma\right] + (1 - x)\left[(r - \theta/r)\ln(1 + C) - \gamma\right] \tag{3}$$

The expected payoff for node 1 choosing not to share is:

$$U_2 = x\left[\omega\ln(1 + C) - \mu\right] + (1 - x)\,k\ln(1 + C) \tag{4}$$

The average expected payoff for node 1 is:

$$\bar{U} = xU_1 + (1 - x)U_2 \tag{5}$$
From Equations (3)–(5), we can obtain the replicator dynamics equation:

$$F(x) = x(1 - x)\left[x\left(k\ln(1 + C) + \mu\right) + (r - \theta/r)\ln(1 + C) - k\ln(1 + C) - \gamma\right] \tag{6}$$
When $F(x) = 0$, the steady states of the participating members in the strategy evolution process are obtained; Equation (6) has at most three steady states, i.e.:

$$P_1^* = 0 \tag{7}$$

$$P_2^* = 1 \tag{8}$$

$$P_3^* = \frac{(k - r + \theta/r)\ln(1 + C) + \gamma}{k\ln(1 + C) + \mu} \tag{9}$$
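Under the same notation, the replicator dynamics and the interior steady state $P_3^*$ can be checked numerically. An illustrative sketch (note that $\omega$ cancels out of $U_1 - U_2$ and therefore does not appear in $F(x)$):

```python
import math

def replicator(x, C, r, theta, gamma, mu, k):
    """Replicator dynamics F(x) of Equation (6); omega cancels in U1 - U2."""
    U = math.log(1 + C)
    return x * (1 - x) * (x * (k * U + mu) + (r - theta / r) * U - k * U - gamma)

def interior_steady_state(C, r, theta, gamma, mu, k):
    """P3* = ((k - r + theta/r) ln(1+C) + gamma) / (k ln(1+C) + mu)."""
    U = math.log(1 + C)
    return ((k - r + theta / r) * U + gamma) / (k * U + mu)
```

$F$ vanishes at $x = 0$, $x = 1$, and at the interior point whenever that point lies in $(0, 1)$.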
Equations (7)–(9) are the equilibrium points of the replicator dynamics equation. An equilibrium point satisfying $F'(P^*) < 0$ is a stable state and the corresponding strategy is referred to as the evolutionarily stable strategy (ESS). Participating members who deviate from the ESS will be eliminated or assimilated in the population of sharing members.

Substituting Equations (7)–(9) into the derivative of the replicator dynamics Equation (6), we obtain:

$$F'(0) = \varphi - \Delta - \gamma$$

$$F'(1) = -\left[(\varphi - \Delta - \gamma) + (\Delta + \mu)\right]$$

$$F'(\eta) = (\Delta - \varphi + \gamma)\left(1 - \frac{\Delta - \varphi + \gamma}{\Delta + \mu}\right)$$

where $\Delta = k\ln(1 + C)$, $\varphi = (r - \theta/r)\ln(1 + C)$, and $\eta = \dfrac{(k - r + \theta/r)\ln(1 + C) + \gamma}{k\ln(1 + C) + \mu}$.
Strategy case 1, if the incentive condition satisfies φ Δ γ > 0 and Δ + μ > 0 :
Since φ Δ γ > 0 (i.e., P ( 0 ) > 0 ), it indicates that P 1 * is not an evolutionarily stable strategy. When φ Δ γ > 0 and Δ + μ > 0 , we derive that P ( 1 ) < 0 , thus P 2 * is an evolutionarily stable strategy. In addition, we can also derive P ( η ) < 0 ; however, in this case, P 3 * < 0 , which violates the realistic condition that the probability of member participation in sharing should be in the range of 0 < P 3 * < 1 . As shown in Figure 1a, only P 2 * is the ESS in this scenario; regardless of the initial proportion of participating members in the game, the ESS of participating members will tend to choose to share threat intelligence as the evolution proceeds.
Strategy case 2, φ Δ γ < 0 and 0 < Δ + μ < ( φ Δ γ ) :
Based on φ Δ γ < 0 , i.e., P ( 0 ) < 0 , it is indicated that P 1 * is an evolutionarily stable strategy. From φ Δ γ < 0 and 0 < Δ + μ < ( φ Δ γ ) , it leads to the conclusion that P 2 * is not an evolutionarily stable strategy since P ( 1 ) > 0 . From φ Δ γ < 0 and 0 < Δ + μ < ( φ Δ γ ) , it is deduced that P ( η ) < 0 . However, at this time, P 3 * > 1 , and the range of probability values for P 3 * as a member participating in sharing should be 0 < P 3 * < 1 . Therefore, P ( η ) does not meet the realistic condition. As shown in Figure 1b, in this case, only P 1 * is an evolutionarily stable strategy. Regardless of the proportion selected for sharing in the initial members, with the evolution of time, the evolutionarily stable strategy of participating members will tend to choose not to participate in threat intelligence sharing, which should be avoided.
Strategy case 3, when φ − Δ − γ < 0 and −(φ − Δ − γ) < Δ + μ:
Based on φ − Δ − γ < 0, i.e., F′(0) < 0, P1* is an evolutionarily stable strategy. From φ − Δ − γ < 0 and −(φ − Δ − γ) < Δ + μ, it is derived that F′(1) < 0, so P2* is also an evolutionarily stable strategy. From the same conditions it is concluded that F′(η) > 0, so P3* is not an evolutionarily stable strategy. This shows that, under these initial conditions, the evolutionarily stable outcome depends on the initial proportion P of participating members who choose to share. As shown in Figure 1c, if 0 < P < (Δ − φ + γ)/(Δ + μ), then as time passes, members continuously imitate and select strategies beneficial to themselves, and the final evolutionary result is that all members tend to stop participating in threat intelligence sharing, i.e., the P1* stable state is reached. If (Δ − φ + γ)/(Δ + μ) < P < 1, the final evolutionary result is that all members tend to participate in threat intelligence sharing, i.e., the P2* stable state is reached.
The above analysis shows that different sharing strategies lead to different evolutionary outcomes, and we quantify the key strategy parameters that affect those outcomes. In strategy case 1, the incentive strategy evolves toward universal participation in sharing, regardless of the initial proportion of group members who choose to participate. Although the main purpose of the incentive mechanism is to encourage members to actively participate in threat intelligence sharing, if members share too much, the difficulty of obtaining rewards is too low and the dynamic parameter γ needs to be adjusted for cost control; otherwise the point rewards will inflate. According to the laws of economics, excessive inflation of point rewards will also undermine the enthusiasm for sharing. In strategy case 2, the incentive strategy cannot offset members' sharing costs and the punishment for non-sharing members is insufficient, so through mutual imitation and learning everyone ultimately chooses to free ride and is unwilling to share. When designing incentive strategies, the threat intelligence sharing platform should avoid strategies resembling this case. In strategy case 3, the evolutionarily stable outcome is determined by the initial proportion P of members participating in sharing; this is an uncertain state that is not conducive to active regulation, so incentive strategies should avoid this situation as well.

3.3. Threat Intelligence Sharing Incentives

In order to achieve the incentive effect, the initial incentive and penalty parameters of the B-CTISM mechanism should be set within the interval of strategy case 1: φ − Δ − γ > 0 and Δ + μ > 0. Within this interval, the evolutionarily stable strategy of the sharing platform is to participate in sharing. Nodes that choose not to participate will ultimately be assimilated, or will exit the sharing platform under the incentive mechanism, effectively curbing the free-riding behavior of selfish nodes.
Within this interval, the parameters need to be adjusted dynamically to maintain an optimal sharing-incentive environment. Adjusting the parameter γ controls the sharing incentive or cost, while the cost-balancing parameter θ adapts according to the time taken to reach sharing saturation in the previous game stage. When that time is too long, the participating members are presumed to be sharing sluggishly; reducing θ then lowers the cost level and helps promote sharing. When the time to reach sharing saturation in the previous stage is too short, sharing enthusiasm is excessively high and the cost-balancing parameter θ should be increased. Doing so has two benefits: (1) raising the cost-neutralization level prevents low-level or new accounts from posting spam intelligence at low cost; (2) it prevents inflation of the point currency while improving the quality of cyber threat intelligence on the sharing platform.
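The inverse relationship between θ and the previous stage's saturation time t can be sketched as follows (a minimal illustration of the rule θ = N/t; the function name and the choice of N are assumptions, not part of the deployed contract):

```python
def update_cost_balance(N, t_prev):
    """Cost-balancing parameter for the next game stage: theta = N / t_prev.
    A long saturation time t_prev (sluggish sharing) lowers theta, cheapening
    publication; a short t_prev (overheated sharing) raises theta."""
    return N / t_prev
```

For instance, with N = 8, a slow previous stage (t = 8) yields θ = 1, while a fast one (t = 2) yields θ = 4.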
Furthermore, given the time-sensitive nature of cyber threat intelligence, the B-CTISM mechanism punishes negative accounts that do not participate in sharing within a designated time frame, to discourage "free-riding" behavior and ensure high-quality shared information. Each node in the sharing mechanism should contribute to the best of its ability; nodes that do not participate in sharing for an extended period reduce the effectiveness of the sharing community and the efficiency of the sharing system, thereby affecting the entire sharing mechanism. Consequently, imposing a certain degree of punishment on uncooperative nodes is necessary to ensure the integrity and effectiveness of shared information.
In the B-CTISM incentive mechanism, participating nodes such as organizations or individuals invoke smart contracts through a software development kit (SDK) to obtain feedback on their sharing behavior, as shown in Figure 2. The parameter interval of strategy case 1 ensures that the platform evolves a stable strategy of participating in sharing. Under this condition, the smart contract dynamically adjusts the cost-balancing parameter θ and the dynamic parameter γ to keep the proportion of nodes participating in sharing within an appropriate interval during repeated games. The sharing incentives or costs processed by the smart contract are stored on the blockchain and fed back to the client, and the points of participating nodes change accordingly. Point information stored on the blockchain cannot be tampered with, and the execution of smart contracts is triggered entirely by actions, which alleviates sharing members' lack of trust in the sharing platform and in other sharing members.

4. Threat Intelligence Sharing Incentive Smart Contracts

4.1. Smart Contract Algorithm

Smart contracts are essentially traditional contract terms encoded as programs that execute automatically on the blockchain. They add programmability to the blockchain while retaining its decentralized and tamper-proof characteristics; the blockchain automatically processes digital assets through the invocation of smart contracts and the triggering of events [30,31]. Applying the B-CTISM threat intelligence sharing incentive mechanism to blockchain-based sharing platforms in the form of smart contracts automatically triggers the execution of incentive strategies, enabling reliable and secure threat intelligence sharing and exchange in an environment without trusted third parties. The mechanism can incentivize and punish participating members during sharing, thereby improving the security and credibility of the sharing platform.
In order to ensure the benign development of the information sharing community and avoid the "free-riding" behavior [32] of nodes that only consume information without sharing, this paper proposes Algorithm 1: SGM (shared game mechanism). The input of the algorithm is the incentive policy parameters under the strategy case 1 condition, and the output is the feedback on sharing behavior, i.e., obtaining a point reward or having to pay a sharing cost. Lines 1–3 obtain the node's CA certificate and verify its validity; only node accounts that have registered and obtained a valid certificate can participate in the platform's threat intelligence sharing, which effectively prevents attackers from attacking through large numbers of fake accounts. Lines 5–7 retrieve the account level and the previous stage's parameters, from which the cost-balancing parameter θ for this stage is calculated as θ = N/t, where N is a constant parameter and t is the time required to reach sharing saturation in the previous game stage. The magnitude of N varies with the desired saturation time: for example, when we want the time to reach saturation in a round to remain close to t and θ to remain close to α, N is best taken to be around α × t. When t in the previous stage is large, θ is correspondingly reduced and the account level at which costs are neutralized decreases, i.e., the level at which accounts obtain incentive rewards instead of paying costs varies inversely with t. The remaining lines calculate the current participation ratio of sharing members and compare it with the set threshold P*: when the number of sharers is small, sharing earns an extra reward; otherwise, γ is adjusted to impose a cost on sharing. If the number of participants exceeds the sharing saturation threshold, the participation count is reset and the next game round begins.
Algorithm 1 shared game mechanism
Input:
    Parameters of the incentive strategy for the strategy case one condition
Output:
    Threat intelligence sharing feedback, i.e., incentives or costs
1:  function SGM(_id, parameters of the first strategy)
2:          cert_data = load_certificate(signcert)  //Get certificate
3:          Validate(cert_data)  //Verify certificate
4:          if true then  //Authentication passed
5:                  R = getUserInformation.rank  //Node account level
6:                  S_last = getPreviousparameter.s  //Cost-balancing parameter of the previous stage
7:                  T = getPreviousparameter.t  //Shared saturation time
8:                  S_new = N/T  //Update cost-balancing parameter
9:                  P = ParticipateSharingCnt/UsersCnt  //Current participation ratio
10:                if P < P* then
11:                        feedback = IncentiveReward  //Reward sharing below the threshold
12:                else
13:                        feedback = Cost  //Impose a sharing cost otherwise
14:                end if
15:                if P > SaturationRate then
16:                        k = k + 1  //Gaming rounds self-increment
17:                        ParticipateSharingCnt = 0  //Reset participation count
18:                end if
19:                return feedback
20:        end if
21: end function
Under the strategic assumptions of the B-CTISM mechanism, members who engage in negative sharing should be punished with point deductions. Threat intelligence is characterized by short timeliness, fast change, and large daily volume [30]; as time goes by, the value of the provided information gradually decreases. Therefore, inactive members who do not participate in information sharing beyond the set time range T should receive a corresponding point deduction as punishment. Algorithm 2, NSP (negative shared punishment), implements this and is shown below:
Algorithm 2 negative shared punishment
Input:
    Node certificate,signcert   //Node identity certificate
    Node account rank,rank  //Node account level
    Integral count,Count  //Nodes points
    Last shared time,last_time   //Last shared time
Output:
    new_count  //Updated node points
1: function NSP(signcert, rank, Count, last_time)
2:        setClock()  //Set the clock to trigger automatically
3:        time = new Date()  //Current time
4:        for id in id_list do
5:                info = getUserInfo(id)  //Get node information
6:                if time − info.last_time > T then
7:                        u = (time − info.last_time) − T  //Deduction grows with the excess idle time
8:                else
9:                        u = 0  //No deduction within the allowed window
10:              end if
11:              new_count = info.count − u  //Updated node points
12:              setUserInfo(id, new_count)  //Write the updated points back
13:      end for
14:      return new_count
15: end function
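The deduction rule in Algorithm 2 amounts to a penalty that grows linearly with idle time beyond the window T. A minimal Python sketch (illustrative names; times are plain numbers rather than block timestamps):

```python
def nsp_deduct(points, last_shared, now, T):
    """Negative shared punishment: nodes idle longer than T lose points
    in proportion to the excess idle time; nodes within the window keep
    their points unchanged."""
    u = max(0, (now - last_shared) - T)  # deduction u
    return points - u
```

E.g., with T = 10, a node that last shared 15 time units ago loses 5 points, while a node that shared 5 units ago loses none.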

4.2. Smart Contracts Deployment

Smart contracts in Ethereum run in the Ethereum virtual machine (EVM), for which there is no explicit way to measure CPU performance. Instead, executing a smart contract consumes gas, which is paid in ether (ETH) to the miners who pack blocks and process transactions; ether is therefore required to deploy a contract, send a transaction, or invoke a contract method. This study uses the Truffle framework for deployment testing of the SGM and NSP algorithms. The experiments were run on a computer with an Intel(R) Core(TM) i5-8250U CPU and 24 GB RAM running Windows 11, using Truffle v5.5.6, Ganache v2.5.4, and the Chai testing framework.
Table 2 shows the details of deploying the SGM and NSP smart contracts, including the gas cost of deployment and the running time measured with test cases written in the Chai testing framework; the gas cost and running time of SGM are slightly higher than those of NSP. By deploying the incentive strategy derived from the evolutionary game analysis onto the blockchain as smart contracts, the tamper-proof nature of the blockchain and the automatic, condition-triggered execution of smart contracts allow sharing members to establish trust directly with one another, without the intervention and supervision of a third party.

5. Experiment Analysis

In order to verify the effectiveness of B-CTISM, simulation experiments are conducted in MATLAB to validate the effect of the incentive strategy and mechanism regulation proposed in Section 3. A control-variable approach is used to verify whether the incentive strategies of strategy cases 1, 2, and 3 evolve as predicted, and to examine the effect of the dynamically regulated parameters on the evolution of sharing. The experimental results reflect the role of the B-CTISM mechanism in facilitating cyber threat intelligence sharing.

5.1. ESS Validation Analysis

In Section 3, the solution of the replicator dynamic equation for evolutionary game theory is used to obtain the evolutionarily stable strategy (ESS). In order to satisfy the conditions of the three strategy cases, we have set the parameters listed in Table 3 for evolutionary simulation testing:
The evolution simulation experiment is carried out in MATLAB; the evolution curve for the incentive parameter settings of strategy case 1 is shown in Figure 3.
Based on the parameter settings mentioned above, φ − Δ − γ = 0.5 and Δ + μ = 10.5, which satisfies the conditions of strategy case 1, i.e., φ − Δ − γ > 0 and Δ + μ > 0. As shown in Figure 3, regardless of the initial proportion of members participating in community sharing, the evolutionarily stable strategy (ESS) of the entire system tends towards full participation, proving that under this condition P2* = 1 is the ESS of the intelligence sharing platform.
According to the above parameter settings, φ − Δ − γ = −3 and Δ + μ = 1.2, which satisfies the conditions φ − Δ − γ < 0 and 0 < Δ + μ < −(φ − Δ − γ) for strategy case 2. As can be seen from Figure 4, no matter what the initial proportion P of members participating in sharing is, the evolutionarily stable strategy of the entire system eventually tends to non-participation; the experiment proves that P1* = 0 is the evolutionarily stable strategy of the threat intelligence sharing platform.
For strategy case 3, the parameter requirements are φ − Δ − γ < 0 and −(φ − Δ − γ) < Δ + μ; here φ − Δ − γ = −4.5 and Δ + μ = 10.5. According to Table 1, we can calculate the evolutionarily stable point P3* = [(k − (r − θ)/r) ln(1 + C) + γ]/[k ln(1 + C) + μ] ≈ 0.429. When the initial participation rate P of the shared members satisfies 0 < P < 0.429, the evolutionarily stable strategy of the cyber threat intelligence community tends over time towards not participating in sharing; conversely, if P satisfies 0.429 < P < 1, the community eventually tends towards participating in sharing. This conclusion is intuitively reflected in Figure 5: when the initial sharing ratio P is less than 0.429, the sharing community evolves towards free riding and not sharing; when it is greater than 0.429, the community evolves towards full participation in sharing. The experimental results confirm that P2* = 1 and P1* = 0 are the evolutionarily stable strategies of the threat intelligence sharing community.
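The reported values for the three cases can be reproduced directly from the Table 3 parameters if, as the simulation numbers suggest, ln(1 + C) is identified with the sharing-benefit scale U (an assumption made here to reproduce the reported figures; the helper function is illustrative):

```python
def case_indicators(r, theta, U, gamma, mu, k):
    """Return (phi - Delta - gamma, Delta + mu, P3*) with
    Delta = k*U and phi = ((r - theta)/r)*U, taking U for ln(1+C)."""
    delta = k * U
    phi = (r - theta) / r * U
    return phi - delta - gamma, delta + mu, (delta - phi + gamma) / (delta + mu)

# Strategy case 1 (r=4, th=2, U=10, g=2,  mu=8,   k=0.25): (0.5, 10.5, P3* < 0)
# Strategy case 2 (r=4, th=2, U=4,  g=4,  mu=0.2, k=0.25): (-3.0, 1.2, P3* > 1)
# Strategy case 3 (r=4, th=2, U=10, g=7,  mu=8,   k=0.25): (-4.5, 10.5, P3* ~ 0.429)
```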

5.2. Incentive Validation Analysis

In order to promote and regulate cyber threat intelligence sharing, the paper sets a dynamic incentive parameter γ and a cost-balancing parameter θ in the incentive strategy. When the proportion of members participating in sharing within the community is below the threshold P* (which can be set as appropriate), reducing the dynamic incentive parameter γ can stimulate sharing; when the proportion is above P*, increasing γ and imposing costs on sharing can prevent point inflation. The cost-balancing parameter θ = N/t, where N is a configurable constant and t is the time at which sharing saturation was reached in the previous game stage. If the sharing saturation state was reached too quickly in the previous stage, the cost-balancing parameter θ in this stage will be higher than in the previous stage, thereby controlling the sharing cost and improving the quality of threat intelligence. Using the control-variable method, the paper changes the dynamic incentive parameter γ or the cost-balancing parameter θ while keeping the other parameters fixed under the initial conditions of strategy case 1, to test whether the incentive mechanism can actively regulate the evolutionary process of the threat intelligence sharing community.
The evolution curves obtained from simulation experiments with the above parameters are shown in Figure 6.
From Figure 6a it can be seen that, with the initial participation rate of shared members P = 0.2 and the other strategy case 1 parameters unchanged, the time for the community to approach sharing saturation increases as the cost-balancing parameter θ increases. In the test data of Table 4, when θ increases to 4, i.e., when the member level r must reach level 4 to neutralize the publication cost, the community evolves towards not participating in sharing. When the dynamic incentive parameter γ is greater than zero it acts as a sharing cost, and when it is less than zero it acts as a sharing incentive. From Figure 6b it can be seen that as γ increases, the time required for the sharing community to reach sharing saturation grows; when the cost is too high, i.e., γ = 6 under the Table 4 test conditions, the evolutionarily stable strategy of the sharing community becomes non-participation in sharing. This experiment verifies that the dynamic incentive parameter γ and the cost-balancing parameter θ can effectively regulate the evolutionary process of the threat intelligence sharing community.
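The regulation effect can also be reproduced with a simple forward-Euler integration of the replicator dynamics dP/dt = P(1 − P)[(φ − Δ − γ) + P(Δ + μ)] (a sketch under the same assumption that ln(1 + C) = U; step size and horizon are arbitrary choices, not values from the paper):

```python
def simulate_sharing(P0, r, theta, U, gamma, mu, k, steps=20000, dt=0.01):
    """Forward-Euler integration of the sharing-proportion dynamics
    dP/dt = P(1-P)[(phi - Delta - gamma) + P(Delta + mu)]."""
    delta = k * U
    phi = (r - theta) / r * U
    P = P0
    for _ in range(steps):
        P += dt * P * (1 - P) * ((phi - delta - gamma) + P * (delta + mu))
        P = min(max(P, 0.0), 1.0)  # keep the proportion in [0, 1]
    return P
```

With the strategy case 1 parameters and P0 = 0.2, the population converges to full sharing, whereas raising the cost to γ = 6 (as in Table 4) drives it to non-participation, mirroring the behavior in Figure 6b.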

5.3. Comparative Analysis

In response to the general lack of effective incentive mechanisms in sharing communities, ref. [25] also proposed an evolutionary game-theoretic incentive model for data sharing, EGDSI, which shares the same goal as the B-CTISM mechanism of safely and efficiently promoting user participation in sharing. Compared with the B-CTISM mechanism, however, the EGDSI model has two shortcomings. (1) The potential risk associated with low-level accounts is overlooked: although EGDSI classifies community users into levels 1–5, the level is only used to calculate sharing benefits, with higher levels yielding higher benefits, whereas B-CTISM introduces a cost-balancing parameter that effectively addresses this problem. (2) The EGDSI model takes no measures against bot users who share negatively, even though the timeliness of information should be taken into account in data sharing; when users do not participate in sharing for a long time, the B-CTISM model takes appropriate punitive measures. The scaling factors of sharing benefit and data cost defined in the EGDSI model have the same meaning as the scaling factor of defense benefit ω and the sharing benefit U in this paper. Setting ω = 4 and U = 10, with the other B-CTISM parameters taken from the strategy case 1 interval, the evolution curves of the two mechanisms' sharing strategies over time are shown in Figure 7.
As can be seen from Figure 7, the B-CTISM model promotes members' participation in sharing and reaches sharing saturation faster for the same initial participation rate. Moreover, the B-CTISM model incorporates the risks associated with low-level accounts and defines penalties for bot accounts that share negatively, which improves security while better promoting sharing, making the B-CTISM mechanism more effective.

6. Conclusions

In the process of sharing cyber threat intelligence, there are problems of mutual distrust among sharing parties, low enthusiasm for participation, and a lack of reasonable and efficient incentive mechanisms. In this paper, we propose a blockchain-based cyber threat intelligence sharing mechanism that promotes sharing, curbs free-riding behavior by participating members, encourages organizations and individuals to participate actively, and improves the enthusiasm and efficiency of sharing. In addition, by building on blockchain, the proposed scheme bridges the trust gap between sharing members, no longer requires trusted third parties, and achieves secure and efficient sharing of cyber threat intelligence. Finally, through simulation experiments and comparative analysis with the EGDSI model, the feasibility and effectiveness of the proposed incentive mechanism are verified. It can effectively promote members' active participation in cyber threat intelligence sharing and, by adjusting the platform's sharing state through dynamically set parameters, can further promote the sharing and exchange of cyber threat intelligence by organizations and individuals, breaking information silos.
In practice, although the incentive mechanism proposed in this paper is a decentralized sharing scheme based on blockchain, it still requires initial initiators and maintainers, and smart contracts face many challenges during deployment and operation, such as contract code security and maintaining the effectiveness and stability of the incentive mechanism, all of which require technical preparation and human resources. The sharing of cyber threat intelligence may also raise ethical and legal issues, such as weighing cybersecurity against privacy protection and complying with relevant laws, regulations, and rules to ensure compliance. Moreover, this research has some limitations: in terms of sharing incentives, the evaluation of the value of the cyber threat intelligence provided by participating members is insufficient, and accurate point-reward feedback is not given based on the quality of the information. Future research can therefore explore how to evaluate the value of cyber threat intelligence to support the design of more accurate incentive feedback mechanisms.

Author Contributions

All authors contributed to the manuscript and discussed the results. Conceptualization, X.M., D.Y. and Y.D.; Software, Y.D., L.L. and H.L.; Formal analysis, Y.D.; Data curation, X.M.; Writing—original draft, X.M.; Visualization, W.N.; Funding acquisition, Y.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Engineering Research Center of Classified Protection and Safeguard Technology for Cybersecurity, grant number C21640.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data are presented in the main text.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. van Haastrecht, M.; Golpur, G.; Tzismadia, G.; Kab, R.; Priboi, C.; David, D.; Racataian, A.; Baumgartner, L.; Fricker, S.; Ruiz, J.F. A Shared Cyber Threat Intelligence Solution for SMEs (vol 10, 2913, 2021). Electronics 2022, 11, 349. [Google Scholar] [CrossRef]
  2. Cha, J.; Singh, S.K.; Pan, Y.; Park, J.H. Blockchain-based cyber threat intelligence system architecture for sustainable computing. Sustainability 2020, 12, 6401. [Google Scholar] [CrossRef]
  3. Brown, R.; Lee, R.M. The Evolution of Cyber Threat Intelligence (cti): 2019 Sans Cti Survey. SANS Institute. 2019. Available online: https://www.sans.org/white-papers/38790/ (accessed on 12 July 2021).
  4. Riesco, R.; Larriva-Novo, X.; Villagrá, V.A. Cybersecurity threat intelligence knowledge exchange based on blockchain: Proposal of a new incentive model based on blockchain and Smart contracts to foster the cyber threat and risk intelligence exchange of information. Telecommun. Syst. 2020, 73, 259–288. [Google Scholar] [CrossRef]
  5. Lin, S.; Yin, J.; Pei, Q.; Wang, L.; Wang, Z. A Nested Incentive Scheme for Distributed File Sharing Systems. In Proceedings of the 2021 IEEE International Conference on Smart Internet of Things (SmartIoT), Jeju, Republic of Korea, 13–15 August 2021; pp. 60–65. [Google Scholar]
  6. Wagner, T.D.; Mahbub, K.; Palomar, E.; Abdallah, A.E. Cyber threat intelligence sharing: Survey and research directions. Comput. Secur. 2019, 87, 101589. [Google Scholar] [CrossRef]
  7. Tounsi, W.; Rais, H. A survey on technical threat intelligence in the age of sophisticated cyber attacks. Comput. Secur. 2018, 72, 212–233. [Google Scholar] [CrossRef]
  8. Sakellariou, G.; Fouliras, P.; Mavridis, I.; Sarigiannidis, P. A Reference Model for Cyber Threat Intelligence (CTI) Systems. Electronics 2022, 11, 1401. [Google Scholar] [CrossRef]
  9. Saxena, R.; Gayathri, E. Cyber threat intelligence challenges: Leveraging blockchain intelligence with possible solution. Mater. Today Proc. 2022, 51, 682–689. [Google Scholar] [CrossRef]
  10. Schlette, D.; Caselli, M.; Pernul, G. A comparative study on cyber threat intelligence: The security incident response perspective. IEEE Commun. Surv. Tutor. 2021, 23, 2525–2556. [Google Scholar] [CrossRef]
  11. Strom, B.E.; Applebaum, A.; Miller, D.P.; Nickels, K.C.; Pennington, A.G.; Thomas, C.B. Mitre att&ck: Design and philosophy. In Technical Report; The MITRE Corporation: Bedford, MA, USA, 2018. [Google Scholar]
  12. Lin, Y.; Liu, P.; Wang, H.; Wang, W.; Zhang, Y. Overview of Threat Intelligence Sharing and Exchange in Cybersecurity. J. Comput. Res. Dev. 2020, 57, 2052. [Google Scholar] [CrossRef]
  13. Amaro, L.J.B.; Azevedo, B.W.P.; de Mendonca, F.L.L.; Giozza, W.F.; Albuquerque, R.D.O.; Villalba, L.J.G. Methodological Framework to Collect, Process, Analyze and Visualize Cyber Threat Intelligence Data. Appl. Sci. 2022, 12, 1205. [Google Scholar] [CrossRef]
  14. Purohit, S.; Neupane, R.; Bhamidipati, N.R.; Vakkavanthula, V.; Wang, S.; Rockey, M.; Calyam, P. Cyber threat intelligence sharing for co-operative defense in multi-domain entities. IEEE Trans. Dependable Secur. Comput. 2022, 1–18. [Google Scholar] [CrossRef]
  15. Yli-Huumo, J.; Ko, D.; Choi, S.; Park, S.; Smolander, K. Where Is Current Research on Blockchain Technology?—A Systematic Review. PLoS ONE 2016, 11, e0163477. [Google Scholar] [CrossRef] [PubMed]
  16. Nakamoto, S. A Peer-to-Peer Electronic Cash System. 2008. Available online: https://bitcoin.org/bitcoin.pdf (accessed on 3 April 2023).
  17. Sofia, D.; Lotrecchiano, N.; Trucillo, P.; Giuliano, A.; Terrone, L. Novel Air Pollution Measurement System Based on Ethereum Blockchain. J. Sens. Actuator Netw. 2020, 9, 49. [Google Scholar] [CrossRef]
  18. Aladhadh, S.; Alwabli, H.; Moulahi, T.; Al Asqah, M. BChainGuard: A New Framework for Cyberthreats Detection in Blockchain Using Machine Learning. Appl. Sci. 2022, 12, 12026. [Google Scholar] [CrossRef]
  19. Szabo, N. Smart contracts: Building blocks for digital markets. EXTROPY J. Transhumanist Thought 1996, 18, 28. [Google Scholar]
  20. Alabdulatif, A.; Al Asqah, M.; Moulahi, T.; Zidi, S. Leveraging Artificial Intelligence in Blockchain-Based E-Health for Safer Decision Making Framework. Appl. Sci. 2023, 13, 1035. [Google Scholar] [CrossRef]
  21. Liang, X.; Yan, Z.; Kantola, R. GAIMMO: A Grade-Driven Auction-Based Incentive Mechanism with Multiple Objectives for Crowdsourcing Managed by Blockchain. IEEE Internet Things J. 2022, 9, 17488–17502. [Google Scholar] [CrossRef]
  22. Ai, Z.; Liu, Y.; Wang, X. ABC: An auction-based blockchain consensus-incentive mechanism. In Proceedings of the 2020 IEEE 26th International Conference on Parallel and Distributed Systems (ICPADS), Hong Kong, China, 2–4 December 2020; pp. 609–616. [Google Scholar]
  23. Cheng, G.; Deng, S.; Xiang, Z.; Chen, Y.; Yin, J. An auction-based incentive mechanism with blockchain for iot collaboration. In Proceedings of the 2020 IEEE International Conference on Web Services (ICWS), Beijing, China, 19–23 October 2020; pp. 17–26. [Google Scholar]
  24. Ding, X.; Guo, J.; Li, D.; Wu, W. An incentive mechanism for building a secure blockchain-based internet of things. IEEE Trans. Netw. Sci. Eng. 2020, 8, 477–487. [Google Scholar] [CrossRef]
  25. Zhang, B.J.; Guo, K.W.; Hu, K. Research on Data Sharing Incentive Mechanism Based on Smart Contract. Comput. Eng. 2022, 48, 37–44. [Google Scholar] [CrossRef]
  26. Motepalli, S.; Jacobsen, H.-A. Reward mechanism for blockchains using evolutionary game theory. In Proceedings of the 2021 3rd Conference on Blockchain Research & Applications for Innovative Networks and Services (BRAINS), Paris, France, 27–30 September 2021; pp. 217–224. [Google Scholar]
  27. Tanimoto, J. Fundamentals of Evolutionary Game Theory and Its Applications; Springer: Cham, Switzerland, 2015. [Google Scholar]
  28. Gintis, H. Game Theory Evolving: A Problem-Centered Introduction to Modeling Strategic Interaction; China Renmin University Press: Beijing, China, 2015. [Google Scholar]
  29. Zhang, W.Y. Game Theory and Intelligence Economics; Shanghai People’s Publishing House: Shanghai, China, 1990. [Google Scholar]
  30. Mohanta, B.K.; Panda, S.S.; Jena, D. An Overview of Smart Contract and Use Cases in Blockchain Technology. In Proceedings of the 2018 9th International Conference on Computing, Communication and Networking Technologies (ICCCNT), Bengaluru, India, 10–12 July 2018; pp. 1–4. [Google Scholar]
  31. Zheng, Z.; Xie, S.; Dai, H.; Chen, X.; Wang, H. An Overview of Blockchain Technology: Architecture, Consensus, and Future Trends. In Proceedings of the 2017 IEEE International Congress on Big Data (BigData Congress), Honolulu, HI, USA, 25–30 June 2017; pp. 557–564. [Google Scholar]
  32. Al-Ibrahim, O.; Mohaisen, A.; Kamhoua, C.; Kwiat, K.; Njilla, L. Beyond Free Riding: Quality of Indicators for Assessing Participation in Information Sharing for Threat Intelligence. arXiv 2017, arXiv:1702.00552. [Google Scholar] [CrossRef]
Figure 1. Evolutionary phase diagram (a) strategy case 1 evolutionary phase diagram; (b) strategy case 2 evolutionary phase diagram; (c) strategy case 3 evolutionary phase diagram.
Figure 2. B-CTISM incentive mechanism shared-regulation process.
Figure 3. Strategy case 1 evolutionary case validation.
Figure 4. Strategy case 2 evolutionary case validation.
Figure 5. Strategy case 3 evolutionary case validation.
Figure 6. (a) Validation analysis of regulation by the cost-neutralization parameter ω; (b) validation analysis of regulation by the dynamic parameter γ.
Figure 7. Evolutionary curves of sharing strategies over time for two mechanisms.
Table 1. Matrix of benefits of participating nodes. Each cell lists the payoffs as (Participating Node 1, Participating Node 2).

| Node 1 \ Node 2 | Shared | Unshared |
| --- | --- | --- |
| Shared | ( rθ·ln(1+C) + ω·ln(1+C) − γ,  rθ·ln(1+C) + ω·ln(1+C) − γ ) | ( rθ·ln(1+C) − γ,  ω·ln(1+C) − μ ) |
| Unshared | ( ω·ln(1+C) − μ,  rθ·ln(1+C) − γ ) | ( k·ln(1+C),  k·ln(1+C) ) |
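The payoff matrix above can be sanity-checked numerically. The following Python sketch encodes the four cells using the strategy case 1 parameters from Table 3; it is an illustration, not the paper's code. The intelligence-value level C is not fixed in this excerpt, so C = 2 is an assumption, as is the reading of γ and μ as subtracted cost/penalty terms.

```python
import math

# Illustrative parameters (strategy case 1, Table 3).
# C is an assumed intelligence-value level, not specified in this excerpt.
r, theta, omega, gamma, mu, k, C = 4, 2, 1, 2, 8, 0.25, 2
L = math.log(1 + C)  # ln(1 + C) appears in every payoff term

def payoff(me_shares: bool, other_shares: bool) -> float:
    """Payoff to one node in the 2x2 game of Table 1."""
    if me_shares and other_shares:
        return r * theta * L + omega * L - gamma  # share and absorb peer intel
    if me_shares and not other_shares:
        return r * theta * L - gamma              # sharing cost, no peer intel
    if not me_shares and other_shares:
        return omega * L - mu                     # free ride, minus penalty mu
    return k * L                                  # neither node shares

# Under these parameters, mutual sharing should beat mutual non-sharing,
# and free riding against a sharer should be worse still.
print(payoff(True, True), payoff(False, False), payoff(False, True))
```

With the case 1 values the penalty μ makes free riding strictly worse than abstaining, which is the intended incentive effect.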
Table 2. Smart contract deployment.

| Smart Contract | Language | Compiler Version | Gas Cost (gwei) | Run Time (ms) |
| --- | --- | --- | --- | --- |
| SGM | Solidity | 0.8.13 | 244,520 | 765 |
| NSP | Solidity | 0.8.13 | 27,513 | 593 |
Table 3. Evolutionary stabilization strategy simulation test parameters.

| | r | θ | U | ω | γ | μ | k |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Strategy Case 1 | 4 | 2 | 10 | 1 | 2 | 8 | 0.25 |
| Strategy Case 2 | 4 | 2 | 4 | 1 | 4 | 0.2 | 0.25 |
| Strategy Case 3 | 4 | 2 | 10 | 1 | 7 | 8 | 0.25 |
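The evolutionary behavior of each parameter case can be reproduced with standard replicator dynamics over the Table 1 payoffs. The sketch below is a minimal Python illustration, not the paper's simulation code: the intelligence value C, the initial share fraction x0, and the Euler integration step are assumptions of this sketch, and γ and μ are read as subtracted costs.

```python
import math

def simulate(r, theta, U, omega, gamma, mu, k, C=2.0, x0=0.5, dt=0.01, steps=5000):
    """Replicator dynamics for the share/unshare game of Table 1.

    x is the fraction of sharing nodes. C, x0, dt and steps are illustrative
    assumptions, since this excerpt fixes none of them. U is listed in
    Table 3 but does not enter the Table 1 payoffs reproduced here.
    """
    L = math.log(1 + C)
    x = x0
    for _ in range(steps):
        # Expected payoff of a sharer vs. a free rider against population x.
        f_share = x * (r * theta * L + omega * L - gamma) + (1 - x) * (r * theta * L - gamma)
        f_free = x * (omega * L - mu) + (1 - x) * (k * L)
        # Euler step of dx/dt = x (1 - x) (f_share - f_free), clamped to [0, 1].
        x = min(max(x + dt * x * (1 - x) * (f_share - f_free), 0.0), 1.0)
    return x

# Strategy case 1 parameters from Table 3: sharing should take over.
print(simulate(4, 2, 10, 1, 2, 8, 0.25))
```

Under these assumptions the case 1 population converges to all-sharing from any interior starting fraction, matching the intended stable strategy of promoting sharing.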
Table 4. Incentive mechanism test parameters.

| Other Parameters | θ | Other Parameters | γ |
| --- | --- | --- | --- |
| Same as strategy case 1 | 0 | Same as strategy case 1 | −8 |
| | 1 | | −4 |
| | 2 | | 0 |
| | 3 | | 3 |
| | 4 | | 6 |
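The θ sweep in Table 4 can be mirrored with the same replicator-dynamics construction. Again this is illustrative Python rather than the authors' code; C, x0, the integration step, and the sign reading of the Table 1 payoffs are assumptions of the sketch. Under those assumptions, raising the incentive coefficient θ from 0 flips the population from the unshared to the shared equilibrium.

```python
import math

def final_share_fraction(r=4, theta=2, omega=1, gamma=2, mu=8, k=0.25,
                         C=2.0, x0=0.1, dt=0.01, steps=20000):
    """Long-run fraction of sharing nodes under Table 1 payoffs.

    Defaults are the strategy case 1 baseline (Table 3); C, x0, dt and
    steps are illustrative assumptions not fixed in this excerpt.
    """
    L = math.log(1 + C)
    x = x0
    for _ in range(steps):
        f_share = r * theta * L - gamma + x * omega * L
        f_free = x * (omega * L - mu) + (1 - x) * k * L
        x = min(max(x + dt * x * (1 - x) * (f_share - f_free), 0.0), 1.0)
    return x

# Sweep the incentive coefficient theta as in Table 4, all else fixed.
for th in [0, 1, 2, 3, 4]:
    print(th, round(final_share_fraction(theta=th), 3))
```

With θ = 0 the small initial sharing population dies out, while any θ ≥ 1 drives it to dominance, which is the regulating role Table 4 is probing.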
Ma, X.; Yu, D.; Du, Y.; Li, L.; Ni, W.; Lv, H. A Blockchain-Based Incentive Mechanism for Sharing Cyber Threat Intelligence. Electronics 2023, 12, 2454. https://doi.org/10.3390/electronics12112454
