Article

A Coordination Technique for Improving Scalability of Byzantine Fault-Tolerant Consensus

1 Department of Computer Science & Engineering, Sogang University, 915 Ricci Hall, 35, Baekbeom-ro, Mapo-gu, Seoul 04107, Korea
2 NonceLab. Inc., 802 Seoul Blockchain Center, 78, Mapo-daero, Mapo-gu, Seoul 04168, Korea
3 Department of Software Engineering, CAIIT, Jeonbuk National University, 567 Baekje-daero, Deokjin-gu, Jeonju-si, Jeollabuk-do 54896, Korea
4 Department of Computer Science & Engineering, Sogang University, 915A Ricci Hall, 35, Baekbeom-ro, Mapo-gu, Seoul 04107, Korea
* Authors to whom correspondence should be addressed.
Appl. Sci. 2020, 10(21), 7609; https://doi.org/10.3390/app10217609
Submission received: 15 September 2020 / Revised: 22 October 2020 / Accepted: 24 October 2020 / Published: 28 October 2020
(This article belongs to the Special Issue Advances in Blockchain Technology and Applications II)

Abstract:
Among various consensus algorithms, Byzantine Fault Tolerance (BFT)-based consensus algorithms are broadly used for private blockchains. However, because BFT-based consensus algorithms require all participants to take part in the consensus process, their scalability issue becomes more noticeable. In this paper, we introduce a consensus coordinator that conditionally executes a BFT-based consensus algorithm after classifying transactions. Transactions are divided into equal and unequal transactions, and unequal transactions are further classified into common and trouble transactions. A consensus algorithm is then executed only for trouble transactions, which allows BFT-based consensus algorithms to achieve scalability. To evaluate our approach, we carried out three experiments in response to three research questions. By applying our approach to PBFT, we obtained 4.75 times better performance than using PBFT alone. In another experiment, we applied our approach to the IBFT of Hyperledger Besu, and the result shows a 61.81% performance improvement. In all experiments, across changes in the number of blockchain nodes, we obtained better performance than the original BFT-based consensus algorithms; thus, we conclude that our approach improves the scalability of the original BFT-based consensus algorithms. We also show a correlation between performance and trouble transactions associated with transaction issue intervals and the number of blockchain nodes.

1. Introduction

Recently, blockchain has been considered one of the core technologies that enable us to create a transparent world by guaranteeing the integrity and transparency of data [1]. In particular, researchers have tried to apply private blockchain technologies, which facilitate data sharing among permissioned participants, to diverse areas such as business [2,3] and computer science [4,5,6]. However, due to the scalability issue, the application of these technologies is limited [7,8,9,10]. Although diverse factors are associated with scalability, such as network bandwidth and cryptographic algorithms, the consensus algorithm, which ensures that participants maintain the same data in distributed environments, is a key factor that significantly influences the issue [11,12].
Among the consensus algorithms for private blockchains, Byzantine Fault Tolerance (BFT)-based consensus algorithms are broadly used; PBFT (Practical Byzantine Fault Tolerance) [13,14] is a popular and representative example. In a BFT-based consensus algorithm, the number of network communications among participants explodes as the number of participants increases. This is because all participants must be involved to complete the consensus process for each transaction, and the process itself is composed of several steps. This characteristic raises performance and scalability issues: the more participants join, the slower the consensus process completes [15].
There have been several previous studies on improving the scalability of BFT-based consensus algorithms (see [16,17,18,19,20,21,22,23]). Some works built sub-groups of blockchain nodes on a regular basis and reduced the number of network communications through a two-step execution of the BFT consensus algorithm: first inside each sub-group, then between the representatives of the sub-groups. While this approach increases the PBFT algorithm's scalability, it has the shortcoming that a new node cannot join any group until new groups have been formed. Other works tried to optimize the number of communications by modifying the PBFT protocol, introducing a collector role or removing faulty nodes during consensus processes. Similarly, some works tried to reduce the number of prime node elections to minimize the election overhead. However, it is hard to expect an outstanding improvement of scalability through these approaches. In addition, another approach deployed a new hardware-based BFT algorithm execution environment, but it has the apparent weakness that all nodes must prepare the specific hardware environment in advance.
To address the above issues, we propose a coordination technique for scalable Byzantine fault-tolerant consensus algorithms. The key idea is to introduce a Consensus Coordinator that controls the conditional execution of a BFT-based consensus algorithm after classifying the transactions of all nodes with respect to their equality. Our approach runs periodically, in step with the block generation time interval, and consists of four steps. First, a prime node is elected among all blockchain nodes; it executes the BFT-based consensus algorithm and communicates with the consensus coordinator. Second, the consensus coordinator collects transactions from the transaction pool of each node. Third, the coordinator classifies transactions based on their equality and decides whether to execute a consensus algorithm. If all transactions are equal, the coordinator lets the prime node execute block generation without executing a consensus algorithm, which completes the synchronization of the blockchain network. If some transactions are not equal, the coordinator divides transactions into common and trouble transactions and requests the prime node to execute a BFT-based consensus algorithm only for the trouble transactions; the prime node then notifies the coordinator of the agreed transactions. Finally, the coordinator sorts all common and agreed transactions in time order and requests the prime node to generate a new block containing all processed transactions.
To evaluate our approach, we conducted three experiments answering three research questions. We measured the performance of the PBFT algorithm with and without our approach: the PBFT equipped with our approach obtained, on average, 4.75 times better performance than PBFT alone. In addition, we applied our approach to Hyperledger Besu, which uses the IBFT (Istanbul Byzantine Fault Tolerance) consensus algorithm, and showed a 61.81% performance improvement compared to using IBFT alone. We also present the correlation between performance and trouble transactions associated with the transaction issue interval and the number of blockchain nodes. The contributions of our approach are summarized as follows:
  • We propose a novel coordination technique to improve the scalability and performance of consensus algorithms, which is applicable to diverse BFT-based consensus algorithms.
  • Our approach was implemented and applied to PBFT and the Hyperledger Besu consensus algorithm, and released as an open-source project for public access.
  • We performed three experiments in response to three research questions and showed the feasibility of our approach.
The remainder of this paper is organized as follows. Section 2 presents BFT-based consensus algorithms as background and related work on improving the scalability of BFT-based consensus algorithms. Section 3 proposes our coordination technique and explains the four steps for achieving a scalable BFT-based consensus algorithm in detail. Section 4 presents the evaluation of our approach by responding to our three research questions. Section 5 concludes our paper and discusses future work.

2. Background and Related Work

This section presents BFT-based consensus algorithms and their characteristics as background and introduces some previous works regarding how to improve BFT-based consensus algorithms’ scalability.

2.1. Background: BFT-Based Consensus Algorithms

BFT (Byzantine Fault Tolerance)-based consensus algorithms denote a group of consensus algorithms for resolving the Byzantine generals problem: how to achieve consensus on data in an environment where normal and malicious nodes are mixed [24]. The representative example is PBFT (Practical Byzantine Fault Tolerance) [13,14] in Hyperledger Fabric 0.6 (Hyperledger Fabric later replaced PBFT with Raft as of version 2.0 [25]), and diverse variations of PBFT such as Tendermint in Cosmos [26], HotStuff in Libra [27], and IBFT in Hyperledger Besu [28] are broadly used. A characteristic of BFT-based consensus algorithms is the fast finality of a transaction: a transaction is finalized immediately once it is issued by a client and validated by N = 3f + 1 participants. In other consensus algorithms such as PoW and PoS, by contrast, a client should wait until the transaction is contained in a new block after issuing it; in Bitcoin, for example, it theoretically takes 1 h to finalize a transaction, and in the worst case it takes even longer when a block is forked. Despite this fast finality, the performance and scalability of BFT-based consensus algorithms inevitably decrease as the number of nodes increases. This is because all participants must join the consensus process, and more than two-thirds of the nodes must agree on transactions by communicating with each other in four steps: pre-prepare, prepare, commit, and reply (Figure 1). Thus, this mechanism always causes a scalability issue depending on the number of nodes [15].
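The N = 3f + 1 bound can be made concrete with a small calculation. The sketch below (the function name is ours) computes, for a network of N nodes, the maximum number f of Byzantine nodes tolerated and the 2f + 1 quorum of matching replies needed to accept a result:

```python
def bft_parameters(n_nodes: int) -> tuple:
    """For N = 3f + 1 nodes, return (f, quorum): f is the maximum number
    of Byzantine nodes tolerated and quorum = 2f + 1 is the number of
    matching replies needed to accept a result."""
    f = (n_nodes - 1) // 3
    quorum = 2 * f + 1
    return f, quorum

# With 4 nodes, PBFT tolerates 1 faulty node and needs 3 matching replies.
print(bft_parameters(4))   # (1, 3)
print(bft_parameters(80))  # (26, 53)
```

This is why adding nodes helps fault tolerance but hurts latency: every extra node enlarges the quorum and the all-to-all communication in the prepare and commit steps.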

2.2. Related Work

Many approaches have been suggested to improve the scalability of BFT-based consensus algorithms, and most of them try to reduce the number of network communications. To control the number of nodes participating in the consensus protocol, Feng et al. suggested the SDMA (Scalable Dynamic Multi-Agent)-PBFT approach, which reduces the number of participants [16]. The approach builds sub-groups among the peers and elects an agent as a primary node in each sub-group. It then carries out the consensus process within each sub-group first, and a second consensus process is performed only among the agents. While this approach increases the PBFT algorithm's scalability by reducing the communication paths of the established blockchain network, it has the shortcoming that a new node cannot join any previously established group until new groups have been formed.
Similar to Feng et al.’s approach, Luu et al. proposed SCP (Scalable Byzantine Consensus Protocol) by executing the first consensus algorithm within sub-groups and the second consensus algorithm among group leaders from a result of the first execution [17]. The approach builds sub-groups by generating a random group number based on their IP address, public key, and nonce, while Feng et al.’s approach builds sub-groups by making the spanning-tree from a root node. Although this research contributed to reducing communication paths, it still has a similar problem that new nodes cannot easily join a blockchain network as in the approach of Feng et al.
As another direction, some research tried to optimize the number of communications by introducing a collector role or removing faulty nodes during consensus processes. Kotla et al. proposed a new BFT-based consensus protocol named Zyzzyva, where the number of non-faulty nodes required by PBFT adaptively changes from N = 3f + 1 to N = 2f + 1 when a faulty node is detected during a consensus process [18]. Gueta et al. suggested SBFT (State-of-the-art Byzantine Fault Tolerant) [19], which reduces the number of communication paths among nodes by gathering the messages of a consensus process into two collector nodes and validating messages in limited places. Similarly, Jiang et al. suggested HSBFT (High Performance and Scalable Byzantine Fault Tolerance), which makes a prime node play a collector role that collects all messages and validates them [20]. HSBFT has a prime node election process based on a node stable table containing an identity number, state, IP, and public key; based on the table, HSBFT excludes unstable nodes and optimizes communication paths. Although these three approaches reduce the number of participant nodes and communication paths, it is hard to expect an outstanding improvement in scalability or performance.
In addition, Lei et al. proposed the RBFT (Reputation-based Byzantine Fault Tolerance) algorithm for reducing communication paths in a private blockchain [21]. Each blockchain node computes its reputation score based on an evaluation of its behaviors (e.g., good behavior in generating a new block), and the number of votes permitted to each node differs depending on the reputation score. Votes are used to decide whether each PBFT step passes by checking whether the number of votes exceeds a specific threshold. Although this can reduce the number of communications, a limited set of nodes can have an out-sized influence on the voting process.
Some research tried to minimize the number of prime node election processes to enhance scalability. Gao et al. proposed an EigenTrust-based PBFT consensus algorithm called T-PBFT [22]. In this approach, they tried to minimize the number of prime node elections based on each node's trust evaluation. Before starting the PBFT consensus process, the proposed eigen trust model evaluates all nodes' trust scores and forms a group called the primary group. The consensus process is then composed of two steps: (1) consensus within the primary group; and (2) consensus between the remaining nodes and the primary group. It can improve scalability by reducing the frequency of changing the single primary node. However, its number of communications is the same as PBFT's, so it is hard to expect a distinct improvement in scalability.
A new hardware-based execution environment has also been introduced to improve the performance of BFT-based consensus algorithms. Liu et al. proposed a hardware-based BFT algorithm execution environment named FastBFT [23]. All nodes use a hardware chip (e.g., Intel SGX) providing a TEE (Trusted Execution Environment) to execute the consensus algorithm, and the TEE supports public-key operations (e.g., multi-signatures) during the consensus process. They also suggested a new FastBFT algorithm that reduces verification steps by collaborating with the TEE. While they improved the BFT algorithm's scalability, the assumption that all nodes must run upon a TEE is a clear limitation.

3. A Coordination Technique for Scalable BFT Consensus

This section presents a coordination technique for achieving the scalability of BFT-based consensus algorithms. Our approach is composed of two parts: Our Coordination Technique and the BFT-based Consensus Algorithm (Figure 2). The BFT-based Consensus Algorithm part, located at the bottom of the figure, indicates traditional BFT-based algorithms such as PBFT and IBFT. It is composed of one prime node that controls the consensus process and other general nodes, similar to general BFT-based blockchain platforms. Each node has a BFT-Module, which controls the consensus process and generates new blocks, and a Transaction Pool, which maintains unconfirmed transactions. The BFT-Module regularly accesses the transactions of the transaction pool and executes a BFT-based consensus algorithm. Once all nodes have achieved consensus on the transactions, the BFT-Module produces a new block from the agreed transactions.
The Our Coordination Technique part corresponds to the top of the figure, above the BFT-based Consensus Algorithm part. In this technique, we newly introduce the Consensus Coordinator, which controls the conditional execution of each node's BFT-based consensus algorithm depending on the equality of transactions. Our consensus coordination technique consists of four steps. (1) The prime node is elected among all participating nodes. (2) The coordinator collects all transactions existing in the transaction pool of each node. (3) The coordinator checks the equality of the transactions and classifies them into common and trouble transactions; for trouble transactions, the coordinator requests the prime node to execute a consensus algorithm and obtains the agreed transactions. (4) The coordinator merges the common and agreed transactions and requests the controller of every node to execute block generation with the merged transactions. The following subsections describe these steps in more detail.
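To make the four steps concrete, the following Python sketch models one coordination round end to end. The Tx and Node classes and all method names are illustrative stand-ins for the coordinator's interactions, not the paper's implementation; the real prime node election (Step 1) happens on the nodes themselves and is omitted here.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tx:
    tx_id: str
    timestamp: float

class Node:
    """Minimal node stand-in: a transaction pool plus a local chain."""
    def __init__(self, node_id, pool):
        self.node_id, self.pool, self.chain = node_id, list(pool), []
    def pool_between(self, t_prev, t_now):
        return [tx for tx in self.pool if t_prev < tx.timestamp <= t_now]
    def run_consensus(self, trouble):
        # Stand-in for a real BFT round over the trouble transactions;
        # here we simply accept them all.
        return set(trouble)
    def generate_block(self, txs):
        self.chain.append(list(txs))

def coordinator_round(nodes, prime, t_prev, t_now):
    """Steps 2-4 of the coordination technique (names are illustrative)."""
    # Step 2: collect each node's transactions from the round's time window.
    tx_sets = [set(n.pool_between(t_prev, t_now)) for n in nodes]
    if all(s == tx_sets[0] for s in tx_sets):      # Step 3.1: all equal
        agreed = tx_sets[0]                        # skip BFT entirely
    else:                                          # Step 3.2: split
        common = set.intersection(*tx_sets)
        trouble = set.union(*tx_sets) - common     # only these need BFT
        agreed = common | prime.run_consensus(trouble)
    block = sorted(agreed, key=lambda tx: tx.timestamp)
    for n in nodes:                                # Step 4: block generation
        n.generate_block(block)
    return block
```

The performance gain comes from the `all(...)` branch: when every pool is identical, the round completes without any BFT communication at all.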

3.1. Step 1. Electing a Prime Node

The first step is to elect a prime node among all participant nodes. The elected prime node plays the role of interacting with the consensus coordinator. This election step runs at a regular interval t (we assume that all nodes share the same time unit through a logical or physical clock algorithm (e.g., [29,30,31])). Once all steps are completed, this prime node election step is carried out again. The algorithm of this step is shown in Algorithm 1.
Algorithm 1 [All Nodes] Electing Prime Node
1: random = Random(seed) % N(Node)
2: if random equals Node_i then
3:     sig_prime = Signature(Node_i, seed)_sk_prime
4:     notify (sig_prime, pk_prime) to CC
5: end if
Electing the prime node starts with a seed, which is the previous block's hash value; the random number generated from the seed is taken modulo the total number of nodes N(Node) to obtain a prime node number. The hash value of the previous block must be the same across all nodes because it was already agreed upon in the previous round. If the result random of the random algorithm is equal to the unique number Node_i assigned to a node beforehand, that node is elected as the prime node. After signing its node number Node_i and the seed with its private key sk_prime, it notifies the consensus coordinator CC with the resulting signature sig_prime and its public key pk_prime.
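Because the seed is the previously agreed block hash, every node can run the election locally and reach the same result. A minimal sketch, assuming Random(seed) is realized as a SHA-256 digest (our choice for illustration; the paper does not fix the random function):

```python
import hashlib

def elect_prime(prev_block_hash: bytes, n_nodes: int) -> int:
    """Derive this round's prime node index from the previous block hash.
    Every node runs this locally; since the hash was already agreed upon
    in the previous round, all nodes obtain the same index."""
    digest = hashlib.sha256(prev_block_hash).digest()
    return int.from_bytes(digest, "big") % n_nodes

# The node whose assigned number Node_i equals this index signs
# (Node_i, seed) with sk_prime and notifies the consensus coordinator.
prime = elect_prime(b"previous-block-hash", 16)
```

Seeding the election with agreed-upon state avoids an extra round of communication just to pick the prime node.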

3.2. Step 2. Collecting Transactions from Transaction Pool

Once the coordinator receives the prime node election notification from the prime node, it collects all transactions from each node's transaction pool as the second step. This collection step is presented in Algorithm 2. The input parameters of this step are sig_prime and pk_prime from the prime node. The coordinator checks the elected prime node's validity by generating a random number with the delivered seed and comparing it with Node_i (see Lines 3–5). If the generated random equals the Node_i received from the prime node, the consensus coordinator requests all nodes to send all transactions accumulated in their transaction pools between the previous time time_p and the current time time_c. If random differs from Node_i, the coordinator terminates all coordination steps of this round and waits idle for the next round.
Algorithm 2 [Coordinator] Collecting Transactions from TxPool
1: Inputs: sig_prime, pk_prime
2: Initialize: Txs ← {}
3: Node_i, seed = Signature(sig_prime)_pk_prime
4: random = Random(seed) % N(Node)
5: if random equals Node_i then
6:     for i = 0; i < N(Node); i++ do
7:         Txs_i ← request Node_i to send Tx(time_p–time_c) of TxPool_i
8:     end for
9: else
10:     Terminate
11: end if
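On the coordinator's side, the same election computation can be re-run to validate the claimed prime node (Algorithm 2, Lines 3–5). The sketch below assumes, as before, that Random(seed) is realized as a SHA-256 digest, and elides the actual verification of sig_prime with pk_prime:

```python
import hashlib

def verify_prime(node_id: int, seed: bytes, n_nodes: int) -> bool:
    """Coordinator-side check: recompute the election from the seed
    recovered out of sig_prime and compare it with the claimed node id.
    Verifying the signature itself with pk_prime is elided in this sketch."""
    digest = hashlib.sha256(seed).digest()
    return int.from_bytes(digest, "big") % n_nodes == node_id
```

If the check fails, the coordinator terminates the round, matching the else-branch of Algorithm 2.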

3.3. Step 3. Processing Equal/Unequal Transactions

Based on the transactions collected from each node, this step decides on the execution of the BFT-based consensus algorithm. Figure 3 shows the processing steps for the collected transactions. At first, the coordinator checks whether the transactions collected from each node are equal to those from the other nodes. When all transactions are equal, it executes the Handling Equal Transactions step, and this third step terminates. When the transactions are not all the same, it first classifies them into common and trouble transactions. The coordinator then requests the prime node to execute a consensus algorithm only for the trouble transactions and obtains the agreed transactions. Finally, the coordinator merges and sorts the common and agreed transactions in time order.

3.3.1. Step 3.1 Handling Equal Transactions

The coordinator performs this step if all transactions from each node are equal. Algorithm 3 shows the detailed steps for handling equal transactions. To check the transactions' equality, the set of transactions from each node is first converted into a hash value (see Lines 3–5), and the hash values are compared. Due to the characteristics of the hash function, the hash values must be the same if all transactions are the same.
Algorithm 3 [Coordinator] Handling Equal Transactions
1: Inputs: Txs = {Txs_0, Txs_1, ..., Txs_n}
2: Initialize: TxsList ← {}
3: for i = 0; i < N(Node); i++ do
4:     TxsList_i ← Hash(Txs_i)
5: end for
6: if all hashes of TxsList are equal then    // Handle Equal Transactions
7:     isConfirmed, sig_prime, pk_prime ← request Node_prime to confirm Txs_0
8:     if isConfirmed == true then
9:         request All Nodes to generate a new block with Txs_0 and sig_prime
10:         time_p ← time_c
11:     else
12:         Terminate
13:     end if
14: else    // For Unequal Transactions
15:     handle Unequal Transactions(Txs)
16: end if
If all transactions are the same, the coordinator requests the prime node Node_prime to confirm the transactions Txs_0, and the prime node responds with the confirmation of the transactions. This confirmation step is necessary for mutual trust between the coordinator and the prime node on the integrity of the transactions from the coordinator. The response from the prime node includes isConfirmed, sig_prime, and pk_prime; among these, sig_prime results from the prime node executing Signature(Txs_0)_sk_prime (see Lines 7–8).
When the prime node confirms all equal transactions, the coordinator requests all nodes of the blockchain network to generate a new block. The controller of each node receives the request and delegates it to the BFT-Module to generate a new block. Every node's time_p is updated with the current time_c to designate the starting point of the next round (see Line 10). If the prime node does not confirm the transactions, this round terminates and time_p remains at the previous time. When some of the transactions are different, the coordinator instead performs the handling unequal transactions step presented in the next subsection.
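The hash-based equality check of Algorithm 3 (Lines 3–6) can be sketched as follows; serializing each pool by joining raw transaction bytes with a separator is an illustrative choice, not the paper's encoding:

```python
import hashlib

def pools_are_equal(tx_pools) -> bool:
    """Hash each node's serialized transaction list and compare the
    digests; identical lists necessarily yield identical hashes, so a
    single distinct digest means all pools are equal."""
    digests = {hashlib.sha256(b"|".join(pool)).hexdigest() for pool in tx_pools}
    return len(digests) == 1

assert pools_are_equal([[b"tx1", b"tx2"], [b"tx1", b"tx2"]])
assert not pools_are_equal([[b"tx1"], [b"tx1", b"tx2"]])
```

Comparing fixed-size digests rather than full pools keeps this check cheap even for large transaction volumes.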

3.3.2. Step 3.2 Handling Unequal Transactions

This step is executed by the coordinator when some of the transactions are not equal. This step is composed of three sub-steps: (1) classifying transactions; (2) executing a consensus algorithm; and (3) sorting all transactions.
(1) Classifying Transactions: In this step, the coordinator classifies all transactions into common transactions and uncommon transactions across the transactions from every node. We call the uncommon transactions trouble transactions. Algorithm 4 shows the classification step; its output is stored in List_comm and List_tr. The classification step is very intuitive: with the boolean flag isCommon, it iterates over all transactions of each node and checks whether each transaction exists in the other nodes' transaction lists (see Lines 3–18).
Algorithm 4 [Coordinator] Classifying Transactions
1: Inputs: Txs = {Txs_0, Txs_1, ..., Txs_n}
2: Initialize: List_comm ← {}, List_tr ← {}, List_agg ← {}
3: for i = 0; i < N(Node); i++ do
4:     for j = 0; j < N(Txs_i); j++ do
5:         isCommon ← true
6:         for k = 0; k < N(Node) and k ≠ i; k++ do
7:             if Tx_j does not exist in Txs_k then
8:                 isCommon ← false
9:                 break
10:            end if
11:        end for
12:        if isCommon == true then
13:            List_comm ← Tx_j
14:        else
15:            List_tr ← Tx_j
16:        end if
17:    end for
18: end for
19: List_agg ← request Node_prime to execute a Consensus Algorithm(List_tr)
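The nested membership loops of Algorithm 4 are equivalent to two set operations: the common transactions are the intersection of all pools, and the trouble transactions are everything else. A compact sketch:

```python
def classify(tx_pools):
    """Split all observed transactions into common ones (present in every
    pool) and trouble ones (missing from at least one pool)."""
    pools = [set(p) for p in tx_pools]
    common = set.intersection(*pools)
    trouble = set.union(*pools) - common
    return common, trouble

common, trouble = classify([{"a", "b"}, {"a", "c"}, {"a", "b"}])
# common == {"a"}; only trouble == {"b", "c"} goes through BFT consensus
```

The smaller the trouble set, the less work the BFT round has to do, which is the source of the approach's speedup.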
(2) Executing a Consensus Algorithm: For the trouble transactions List_tr from the previous sub-step, the coordinator requests the prime node to execute a consensus algorithm and obtains a list of agreed transactions List_agg from the prime node, as denoted on Line 19 of Algorithm 4. It should be noted that any BFT-based consensus algorithm can be applied in this step. Not all transactions in List_tr are necessarily contained in List_agg, because some of them might not complete the consensus process; such unagreed transactions are removed according to the BFT-based consensus algorithm.
(3) Sorting All Transactions: Based on the common transactions List_comm and the agreed transactions List_agg, the coordinator merges and sorts them in time order. The coordinator then requests the prime node to confirm the merged transactions, similar to the process for equal transactions (see Algorithm 5, Line 5). The sig_prime is produced by the prime node using Signature(SortedList_csn)_sk_prime. If the transactions are confirmed, the coordinator requests all nodes to generate a new block with SortedList_csn, and every node's time_p is updated to the current time time_c.
Algorithm 5 [Coordinator] Sorting the Merged Transactions
1: Inputs: List_comm, List_agg
2: Initialize: List_csn ← {}, SortedList_csn ← {}
3: List_csn ← (List_comm ∪ List_agg)
4: SortedList_csn ← sort(List_csn)
5: isConfirmed, sig_prime, pk_prime ← request Node_prime to confirm SortedList_csn
6: if isConfirmed == true then
7:     request All Nodes to generate a new block with SortedList_csn and sig_prime
8:     time_p ← time_c
9: else
10:     Terminate
11: end if
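Lines 3–4 of Algorithm 5 reduce to a set union followed by a time-ordered sort. In the sketch below, timestamp_of is a hypothetical accessor for a transaction's issue time:

```python
def merge_for_block(list_comm, list_agg, timestamp_of):
    """Merge the common and agreed transactions and order them by issue
    time, mirroring Lines 3-4 of Algorithm 5."""
    return sorted(set(list_comm) | set(list_agg), key=timestamp_of)

# Illustrative transactions identified by name, with made-up issue times.
issue_times = {"a": 3, "b": 1, "c": 2}
block_txs = merge_for_block({"a", "b"}, {"c"}, issue_times.get)
# block_txs == ["b", "c", "a"]
```

Sorting by issue time restores a single canonical ordering even though common and agreed transactions arrive from two different paths.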

3.4. Step 4. Generating a New Block

The last step is to generate a new block, which relies on the underlying blockchain platform. The coordinator does not intervene in this final step; all nodes create a new block with the transferred transactions and the previous block's hash. The controller, which requests the BFT-Module to generate a new block and accesses the transaction pools, is developed for each blockchain platform, so our approach can be applied to diverse BFT-based consensus algorithms.

4. Evaluation

This section describes the results of the experiments designed to evaluate our approach. For this evaluation, we established the three research questions below and carried out three experiments in response to them.
  • RQ1: How much can the scalability of PBFT be increased through our proposed approach?
  • RQ2: What is the correlation between the trouble transactions and performance?
  • RQ3: How much can our approach improve the scalability of IBFT of Hyperledger Besu?

4.1. RQ1: How Much Can the Scalability of PBFT Be Increased through Our Proposed Approach?

This first research question is intended to figure out how much our suggested approach achieves our research aim, which is to improve the scalability of BFT-based consensus algorithms. For this research question, we selected and implemented the PBFT consensus algorithm, the most popular BFT-based consensus algorithm. We then measured the performance of PBFT with and without our approach, increasing the number of nodes to examine scalability.
Experimental setting for RQ1. To respond to RQ1, we built a PBFT network (our source code for implementing the PBFT network is available at https://github.com/jungwonrs/JwRalph_Seo/tree/master/lab/Agent_Consensus) based on Castro and Liskov's research [13,14]. Initially, we structured four nodes and issued a transaction every 10 ms for 10,000,000 ms, transmitting one million transactions over about 2 h 47 min. In addition, we set the block generation interval to 10 s, which implies that each node generates a new block every 10 s with the transactions in its transaction pool (through a sensitivity analysis, we obtained 10 s as the block generation interval with the best performance). We then measured the total elapsed time until the consensus process of the transactions was complete and computed an average elapsed time. We prepared 81 physical computers and deployed the 80 PBFT nodes and one consensus coordinator onto them, one per computer. The hardware specification of each computer was an Intel i5-3570 3.4 GHz CPU with 4 GB RAM, running Windows 10.
Experimental result for RQ1. Figure 4 shows the result of the experiment. In the initial experiment with four nodes, the PBFT with the consensus coordinator, denoted CC + PBFT, took 0.0328 s per transaction on average, while PBFT without our approach, denoted PBFT, took 0.1237 s. The gap between the elapsed times of the two approaches grew as the number of nodes increased: when the number of nodes reached 80, the elapsed times of PBFT and CC + PBFT became 6.2212 and 1.1191 s, respectively. Thus, the PBFT equipped with our approach obtained 3.77 times (=0.1237/0.0328) higher performance than PBFT with the initial four nodes, and the gain grew to 5.56 times (=6.2212/1.1191) with 80 nodes. Across all node counts, the performance of the PBFT with our approach was an average of 4.75 times higher than that of PBFT alone. Therefore, we can conclude that our approach improves the performance of the PBFT consensus algorithm.
In addition, we observed that the elapsed-time growth rate of PBFT is larger than that of CC + PBFT. While PBFT's elapsed time from 4 to 80 nodes grew 50.29 times (=6.2212/0.1237), that of CC + PBFT grew 34.12 times (=1.1191/0.0328). This implies that our approach improves the PBFT consensus algorithm's scalability as the number of nodes increases, compared to using PBFT alone. The elapsed time of PBFT increases because all nodes in the PBFT algorithm must participate in the consensus process, so the number of communications grows with the number of nodes, and the consensus process must always be executed for all transactions (i.e., one million transactions). Our approach, however, checks the equality of transactions and executes the consensus process only for trouble transactions. Thus, depending on the proportion of trouble transactions associated with the number of nodes, the elapsed time of CC + PBFT increases, but not as steeply as that of PBFT.

4.2. RQ2: What Is the Correlation between the Trouble Transactions and Performance?

The second research question examines how much our approach can contribute to the performance improvement of a BFT-based consensus algorithm. In the real world, every blockchain node issues its own transactions, and it rarely happens that all transactions in a transaction pool are equal. According to Donet and Pérez-Solà’s experiment [32], transaction propagation in a Bitcoin network of 344 nodes took 35 min on average, which means that the transaction pools of individual nodes commonly differ. For this research question, we built an environment in which each node has many trouble transactions, as in the real world, by controlling the transaction issue interval, and we computed the correlation between the elapsed time and the proportion of trouble transactions, denoted τ.
Experimental setting for RQ2. To simulate this environment, we started from the experimental setting of RQ1 with the same hardware specification and issued one million transactions while varying the transaction issue interval from 15 ms down to 1 ms. We then measured the total elapsed time and obtained the average elapsed time per transaction by dividing the total by the number of transactions, as shown in Table 1. In addition, we observed the proportion of trouble transactions τ in the consensus coordinator to obtain the correlation between trouble transactions and the elapsed time.
Experimental results for RQ2. Table 1 shows the result of the experiment. In the table, the columns 15 ms, 10 ms, 5 ms, 2 ms, and 1 ms denote representative transaction issue intervals. We computed τ by averaging the proportion of trouble transactions in the transaction pools collected from each node for every time_p − time_c period (i.e., 10 s). Cells in the table are highlighted with colors corresponding to their τ values (see the color legend in the table). When the number of nodes is 16 and a transaction is issued every 15 ms, our approach took 0.099 s per transaction on average (italicized in the table). As the transaction issue interval decreases, both the average elapsed time and τ increase; likewise, an increase in the number of nodes increases both the average elapsed time and τ.
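The computation of τ can be sketched as follows (a minimal Python sketch; representing pools as sets of transaction ids and defining a trouble transaction as one absent from at least one node's pool are our assumptions based on the description, not the authors' exact code):

```python
def trouble_proportion(pools):
    """pools: one transaction pool (a set of tx ids) per node, sampled for a
    single coordinator period. Returns tau, the average per-node proportion
    of transactions that are not common to every node's pool."""
    common = set.intersection(*pools)  # transactions held by all nodes
    ratios = [len(pool - common) / len(pool) if pool else 0.0
              for pool in pools]
    return sum(ratios) / len(ratios)

# Example: each node holds {1, 2} plus one private transaction, so one
# third of every pool is 'trouble' and tau = 1/3.
tau = trouble_proportion([{1, 2, 3}, {1, 2, 4}, {1, 2, 5}])
```

Averaging this quantity over every 10-s coordinator period yields the τ values highlighted in Table 1.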
From the result, we established the correlation between τ and the average elapsed time per transaction, Avg.Elap.Time_tx, based on the transaction issue interval t_it and the number of nodes N(Node), as shown in Equation (1). The equation implies that the average elapsed time is proportional to the proportion of trouble transactions τ with factor δ. In turn, τ is inversely proportional to the transaction issue interval t_it and has a logarithmic relation with the number of nodes N(Node). In the experiment, we obtained δ = 2.5, α = 2, and β = 0.1, which indicates that the proportion of trouble transactions strongly affects the average elapsed time per transaction.
Avg.Elap.Time_tx ≈ δ · τ,   τ ≈ α · (1/t_it) + β · log(N(Node))   (1)
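Plugging the fitted constants into Equation (1) gives a simple predictive model (restated below in Python; the base of the logarithm and millisecond units for t_it are not stated in the paper, so base-10 and milliseconds are assumptions):

```python
import math

DELTA, ALPHA, BETA = 2.5, 2.0, 0.1  # constants fitted in the experiment

def tau_model(t_it_ms: float, n_nodes: int) -> float:
    """Right-hand side of Equation (1): tau grows as the issue interval
    shrinks and grows logarithmically with the number of nodes."""
    return ALPHA / t_it_ms + BETA * math.log10(n_nodes)

def avg_elapsed_model(t_it_ms: float, n_nodes: int) -> float:
    """Left-hand side of Equation (1): elapsed time proportional to tau."""
    return DELTA * tau_model(t_it_ms, n_nodes)

# The model reproduces the qualitative trends of Table 1: both a shorter
# issue interval and a larger node count increase the estimated elapsed time.
print(avg_elapsed_model(15, 16), avg_elapsed_model(1, 76))
```

The sketch is meant only to show the shape of the relation, not to reproduce the absolute values in Table 1.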

4.3. RQ3: How Much Can Our Approach Improve the Scalability of IBFT of Hyperledger Besu?

We established the third research question regarding the applicability of our approach to a real-world open-source blockchain framework using another BFT-based consensus algorithm. We selected Hyperledger Besu, a popular Ethereum client implementation that supports both public and private blockchains. It uses IBFT (Istanbul Byzantine Fault Tolerance), which enhances performance by decreasing the number of nodes required for transaction confirmation from 3f + 1 to 2f + 1 (IBFT: https://github.com/ethereum/EIPs/issues/650). In the experiment for RQ3, we modified the Hyperledger Besu source code to communicate with our controller for requesting new block generation and accessing the transaction pool (our source code for implementing the BFT network is available at https://github.com/jungwonrs/JwRalph_Seo/tree/master/lab/besu_backup). Then, we performed an experiment similar to that of RQ1 to determine how much our approach can improve the performance of the IBFT consensus protocol.
Experimental setting of RQ3. We modified Hyperledger Besu 1.5.1 (https://github.com/hyperledger/besu/tree/1.5.1) to communicate with our consensus coordinator. For the experiment, we transmitted random transactions to Hyperledger Besu at regular intervals during the designated period. Initially, the number of nodes was four, and we gradually increased it to 40. We then measured the number of transactions contained in the generated blocks to determine throughput. We set the block generation time of Hyperledger Besu to 10 s, and the coordinator execution interval was also set to 10 s because this setting showed the best performance. The experiment was carried out for 5 min (=300 s, i.e., 30 block generations) for each node configuration.
In the experiment, the transaction transmission interval started from 10 ms because a significant number of transactions was missed in Hyperledger Besu with intervals under 10 ms. We varied the transmission interval from 10 to 40 ms in 5-ms steps. The hardware used for the blockchain nodes and the consensus coordinator was an Intel i7-8700 3.2 GHz CPU with 24 GB RAM running Windows 10. All nodes and the consensus coordinator were executed on one computer. Due to this hardware limitation, the maximum number of Hyperledger Besu nodes was set to 40. In addition, the gas limit of Hyperledger Besu was removed to generate transactions continuously (we refer to the method shown on the official Besu website: https://besu.hyperledger.org/en/stable/HowTo/Configure/FreeGas/).
Experimental results of RQ3. Figure 5 shows the result of the experiment with a transmission interval of 10 ms and from 4 to 40 blockchain nodes. In the figure, the results of using IBFT alone and IBFT equipped with our approach are denoted IBFT and CC + IBFT, respectively. The y-axis indicates the number of transactions contained in the blocks generated during 5 min. With four blockchain nodes, the numbers of transactions contained in the 30 generated blocks were 24,173 for IBFT and 26,152 for CC + IBFT, so CC + IBFT achieved an 8.19% (=(26,152 − 24,173)/24,173) performance improvement. The total number of transactions that can be issued every 10 ms for 5 min is 30,000, but 3848 (=30,000 − 26,152) and 5827 (=30,000 − 24,173) transactions were missed due to the performance limitations of Hyperledger Besu and our approach. With 40 blockchain nodes, IBFT processed 1563 transactions, while CC + IBFT processed 6607, a 322.71% (=(6607 − 1563)/1563) performance improvement. Thus, across all node configurations with the 10-ms transaction issue interval, the combination of IBFT and our consensus coordinator obtained a 37.75% (=(154,782 − 112,361)/112,361) performance improvement on average.
Figure 6 shows the result of the experiment with a 25-ms interval. At this interval, the total number of transactions that can be issued in 5 min is 12,000 (=300/0.025), which is the maximum number of transactions that can be contained in the generated blocks. With four blockchain nodes, IBFT and CC + IBFT processed 11,900 and 12,000 transactions, respectively; most of the issued transactions were contained in the generated blocks, so the gap between the two is small. However, with 20 nodes, CC + IBFT achieved a 53.25% (=(11,255 − 7344)/7344) performance improvement, and with 40 blockchain nodes a 344.81% (=(7584 − 1705)/1705) improvement compared to using IBFT alone. Across all node configurations, the combined use of IBFT and our approach gained a 61.81% (=(103,348 − 63,868)/63,868) performance improvement on average. Thus, we can conclude that our approach improved the performance of specific blockchain node configurations and the scalability of the IBFT consensus algorithm, because the loss of performance is smaller as the number of blockchain nodes increases. All datasets resulting from this experiment are presented in Appendix A.
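All improvement percentages quoted above come from the same relative-change formula; the snippet below recomputes them from the transaction counts in the text:

```python
def improvement(baseline: int, improved: int) -> float:
    """Relative performance improvement of `improved` over `baseline`, in %."""
    return (improved - baseline) / baseline * 100

# 10-ms interval
print(round(improvement(24_173, 26_152), 2))    # 4 nodes  -> 8.19
print(round(improvement(1_563, 6_607), 2))      # 40 nodes -> 322.71
print(round(improvement(112_361, 154_782), 2))  # average  -> 37.75
# 25-ms interval
print(round(improvement(7_344, 11_255), 2))     # 20 nodes -> 53.25
print(round(improvement(63_868, 103_348), 2))   # average  -> 61.81
```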
While carrying out this experiment, we observed the proportion of equal and unequal transactions in the consensus coordinator, as shown in Table 2. As the coordinator execution interval is 10 s, our approach is executed at most 30 times in 5 min. We counted the number of executions that handled equal transactions (Step 3.1, Handling Equal Transactions) and the number that handled unequal transactions (Step 3.2, Handling Unequal Transactions), expressed as Equal Txs and Unequal Txs in the table. With four nodes at 10 and 25 ms, the transaction pools of all nodes were equal in 93.33% and 96.67% of the 30 coordinator executions, respectively. However, as the number of nodes increases, the proportion of equal-transaction cases decreases and that of unequal-transaction cases increases. On average, equal and unequal transaction cases accounted for 55.33% and 44.67% at the 10-ms transaction issue interval, and 72% and 28% at 25 ms, respectively. Thus, we recognized that the performance and scalability improvement of our approach is positively associated with the proportion of equal transactions, as indicated by Equation (1).

4.4. Threats to Validity

Construct Validity. The results of RQ1, RQ2, and RQ3 may be influenced by the hardware specification and the version of Hyperledger Besu. Because diverse factors affect performance, experimental results such as elapsed times and transaction counts can differ across environments. However, we carried out our experiments on the same hardware specification for the control and experimental groups, so the relative comparison of the results remains valid for our purposes. In addition, the results of the experiment for RQ3 may differ with the version of Hyperledger Besu. We selected version 1.5.1, which was the most recent version at the time; however, versions are upgraded frequently, so different versions may show different results.
Content Validity. In this paper, scalability is measured by the extent of the decrease in performance as the number of nodes increases. Based on this definition, we kept measuring the performance gap across node configurations. We also defined the term transaction issue to mean that a client issues one transaction and the transaction is contained in a new block through block generation. However, because the block generation interval was set to 10 s in the experiments for RQ1 and RQ3, the elapsed time can only be measured in units of 10 s, which is not exact. To address this, we ran the experiment for 2 h 40 min, which is long enough that the 10-s gap can be ignored when measuring the average elapsed time per transaction. In the experiment for RQ3, we fixed the experiment duration at 30 block generations (i.e., 5 min) to resolve the issue.
Internal Validity. Experimental results may be affected by different settings of the coordinator execution interval (i.e., time_p − time_c) and the block generation time for PBFT and IBFT in Hyperledger Besu. To handle this issue, we performed a sensitivity analysis and observed that setting the coordinator execution interval and the block generation time of PBFT and IBFT to 10 s showed the best performance. However, performance may differ under other interval settings.
External Validity. In this paper, we claim that our approach is efficient for BFT-based consensus algorithms, and we applied it to two consensus algorithms: PBFT and IBFT. It is hard to claim that our approach applies to all BFT-based consensus algorithms. However, we selected the most popular BFT-based consensus algorithm, PBFT, from which many consensus algorithms such as IBFT, Zyzzyva [18], SBFT [19], HotStuff [33,34], and Tendermint [26] are derived. Since we selected IBFT as a representative derivative of PBFT, we argue that our approach can also be applied to other BFT-based consensus algorithms.

5. Conclusions

This paper proposes a coordination technique for improving the scalability of BFT-based consensus algorithms. The technique is composed of four steps: (1) electing a prime node; (2) collecting transactions from transaction pools; (3) processing equal and unequal transactions; and (4) generating a new block. Our key idea is to control the conditional execution of the consensus algorithm by dividing the transaction pool into equal and unequal transactions and then further dividing the unequal transactions into common and trouble transactions. The consensus algorithm is executed only for trouble transactions, and the results are merged and finalized by sharing the transactions across all blockchain nodes.
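The conditional execution at the heart of this idea can be sketched as follows (a minimal Python sketch under our own assumptions: pools are sets of transaction ids, `run_bft` stands in for the underlying BFT algorithm, and the prime-node election and network exchange are omitted):

```python
def coordinate(pools, run_bft):
    """pools: {node_id: set of tx ids} collected from every node.
    run_bft: consensus function executed only on trouble transactions."""
    all_pools = list(pools.values())
    union = set.union(*all_pools)
    common = set.intersection(*all_pools)

    if union == common:             # Step 3.1: all pools are equal,
        return sorted(union)        # so no consensus round is needed

    trouble = union - common        # Step 3.2: only the disputed
    agreed = run_bft(trouble)       # transactions go through BFT consensus
    return sorted(common | agreed)  # merged result fills the new block

# Nodes agree on {1, 2}; only txs 3 and 4 require a consensus round.
block = coordinate({"n1": {1, 2, 3}, "n2": {1, 2, 4}}, run_bft=lambda t: t)
```

The more often pools are identical, the more often the expensive BFT round is skipped entirely, which is why performance correlates with the proportion of equal transactions.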
Based on this approach, we carried out three experiments to answer three research questions. As a result, PBFT equipped with our approach performed 4.75 times better on average than PBFT alone. In addition, our approach improved performance by up to 61.81% on average (at the 25-ms interval) compared to the single use of IBFT. We also showed the correlation between performance and trouble transactions associated with the transaction issue interval and the number of blockchain nodes.
Although our approach improved the scalability of BFT-based consensus algorithms, it has explicit limitations. First, the consensus coordinator is centralized, which exposes it as a single point of failure. Second, our approach does not address recovery of the coordinator after a system failure or restart. Third, our approach should be tested in real-world environments with diverse synchronization issues, such as clock synchronization across distributed nodes. For future work, we plan to research distributing the centralized consensus coordinator and establishing a recovery strategy for system failures in real-world environments.

Author Contributions

Conceptualization, J.S. and D.K.; Funding acquisition, S.P.; Investigation, J.S.; Methodology, S.K.; Supervision, S.P.; Writing—original draft, J.S., D.K. and S.P.; and Writing—review and editing, S.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2020-2017-0-01628) supervised by the IITP (Institute for Information & Communications Technology Promotion).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Raw Data of RQ3

Table A1. Total number of transactions contained in the 30 blocks generated during 5 min, for each node configuration and transaction transmission interval.

N(Node) | IBFT (10 ms) | CC + IBFT (10 ms) | IBFT (25 ms) | CC + IBFT (25 ms)
4 | 24,173 | 26,152 | 11,900 | 12,000
8 | 19,418 | 22,263 | 10,397 | 12,000
12 | 16,917 | 19,808 | 8,909 | 12,000
16 | 15,474 | 17,669 | 8,005 | 11,690
20 | 13,515 | 16,903 | 7,344 | 11,255
24 | 9,521 | 15,271 | 5,138 | 10,402
28 | 5,694 | 11,736 | 4,935 | 9,637
32 | 3,986 | 10,324 | 2,886 | 8,868
36 | 2,100 | 8,049 | 2,649 | 7,912
40 | 1,563 | 6,607 | 1,705 | 7,584
Total | 112,361 | 154,782 | 63,868 | 103,348

References

  1. Swan, M. Blockchain: Blueprint for a New Economy, 1st ed.; O’Reilly Media: Sebastopol, CA, USA, 2015; ISBN 978-149-192-049-7. [Google Scholar]
  2. Cong, L.W.; He, Z. Blockchain Disruption and Smart Contracts. Rev. Financ. Stud. 2019, 32, 1754–1797. [Google Scholar] [CrossRef]
  3. Catalini, C.; Gans, J.S. Some Simple Economics of The Blockchain. Commun. ACM 2019, 63, 80–90. [Google Scholar] [CrossRef]
  4. Banerjee, M.; Lee, J.H.; Choo, K.K.R. A blockchain future for internet of things security: A position paper. Digit. Commun. Netw. 2018, 4, 149–160. [Google Scholar] [CrossRef]
  5. Azaria, A.; Ekblaw, A.; Lippman, A. MedRec: Using Blockchain for Medical Data Access and Permission Management. In Proceedings of the 2016 2nd International Conference on Open and Big Data (OBD), Vienna, Austria, 22–24 August 2016; pp. 25–30. [Google Scholar]
  6. Korpela, K.; Hallikas, J.; Dahlberg, T. Digital Supply Chain Transformation toward Blockchain Integration. In Proceedings of the 50th Hawaii International Conference on System Sciences, Hawaii, HI, USA, 4 January 2017; pp. 4182–4191. [Google Scholar]
  7. Pelz-Sharpe. Available online: https://www.deep-analysis.net/wp-content/uploads/2019/08/DA-190812-Ent-Blockchain-forecast.pdf (accessed on 20 December 2019).
  8. Kim, S.; Kwon, Y.; Cho, S. A Survey of Scalability Solution on Blockchain. In Proceedings of the 2018 International Conference on Information and Communication Technology Convergence (ICTC), Jeju, Korea, 17–19 October 2018; pp. 1204–1207. [Google Scholar]
  9. Chauhan, A.; Malviya, O.P.; Verma, M.; Singh, T.M. Blockchain and Scalability. In Proceedings of the 2018 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C), Lisbon, Portugal, 16–20 July 2018; pp. 122–128. [Google Scholar]
  10. Scherer, M. Performance and Scalability of Blockchain Networks and Smart Contract. Available online: http://www.diva-portal.org/smash/record.jsf?pid=diva2:1111497 (accessed on 1 March 2020).
  11. Zheng, Z.; Xie, S.; Dai, H.N.; Chen, X. Blockchain Challenges and Opportunities: A Survey. Int. J. Web Grid Serv. 2018, 14, 352–375. [Google Scholar] [CrossRef]
  12. Cachin, C.; Vukolic, M. Blockchains Consensus Protocols in the Wild. Available online: https://arxiv.org/abs/1707.01873 (accessed on 21 December 2019).
  13. Castro, M.; Liskov, B. Practical Byzantine Fault Tolerance. In Proceedings of the Third Symposium on Operating Systems Design and Implementation, New Orleans, LA, USA, 22–25 February 1999; pp. 173–186. [Google Scholar]
  14. Castro, M.; Liskov, B. Practical Byzantine Fault Tolerance and Proactive Recovery. ACM Trans. Comput. Syst. 2002, 20, 398–461. [Google Scholar] [CrossRef]
  15. Sukhwani, H.; Martínez, J.M.; Chang, X.; Trivedi, K.S.; Rindos, A. Performance Modeling of PBFT Consensus Process for Permissioned Blockchain Network (Hyperledger Fabric). In Proceedings of the 2017 IEEE 36th Symposium on Reliable Distributed Systems (SRDS), Hong Kong, China, 26–29 September 2017; pp. 253–255. [Google Scholar]
  16. Feng, L.; Zhang, H.; Chen, Y.; Lou, L. Scalable Dynamic Multi-Agent Practical Byzantine Fault-Tolerant Consensus in Permissioned Blockchain. Appl. Sci. 2018, 8, 1919. [Google Scholar] [CrossRef] [Green Version]
  17. Luu, L.; Narayanan, V.; Baweja, K.; Zheng, C.; Gilbert, S.; Saxena, P. SCP: A Computationally-Scalable Byzantine Consensus Protocol For Blockchains. Available online: https://eprint.iacr.org/2015/1168/20160823:024020 (accessed on 15 March 2020).
  18. Kotla, R.; Alvisi, L.; Dahlin, M. Zyzzyva: Speculative Byzantine Fault Tolerance. ACM Trans. Comput. Syst. 2010, 27, 45–58. [Google Scholar] [CrossRef]
  19. Gueta, G.G.; Abraham, I.; Grossman, S. SBFT: A Scalable and Decentralized Trust Infrastructure. In Proceedings of the 2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), Portland, OR, USA, 24–27 June 2019; pp. 568–580. [Google Scholar]
  20. Jiang, Y.; Lian, Z. High Performance and Scalable Byzantine Fault Tolerance. In Proceedings of the 2019 IEEE 3rd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), Chengdu, China, 15–17 March 2019; pp. 1195–1202. [Google Scholar]
  21. Lei, K.; Zhang, Q.; Xu, L.; Qi, Z. Reputation-Based Byzantine Fault-Tolerance for Consortium Blockchain. In Proceedings of the 2018 IEEE 24th International Conference on Parallel and Distributed Systems, Singapore, 11–13 December 2018; pp. 604–611. [Google Scholar]
  22. Gao, S.; Yu, T.; Zhu, J.; Cai, W. T-PBFT: An Eigen Trust-based practical Byzantine fault tolerance consensus algorithm. China Commun. 2019, 16. [Google Scholar] [CrossRef]
  23. Liu, J.; Li, W.; Karame, G.O.; Asokan, N. Scalable Byzantine Consensus via Hardware-Assisted Secret Sharing. IEEE Trans. Comput. 2019, 68. [Google Scholar] [CrossRef] [Green Version]
  24. Lamport, L.; Shostak, R.; Pease, M. The Byzantine Generals Problem. ACM Trans. Program. Lang. Syst. 1982, 4, 382–401. [Google Scholar] [CrossRef] [Green Version]
  25. Sousa, J.; Bessani, A.; Vukolic, M. A Byzantine Fault-Tolerant Ordering Service for the Hyperledger Fabric Blockchain Platform. In Proceedings of the 2018 48th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), Luxembourg, 25–28 June 2018; pp. 51–58. [Google Scholar]
  26. Kwon, J. Tendermint: Consensus without Mining. Available online: https://tendermint.com/docs/tendermint.pdf (accessed on 20 May 2020).
  27. Amsden, Z.; Arora, R.; Bano, S.; Baudge, M.; Blackshear, S.; Bothra, A.; Cabrera, G.; Catalini, C.; Chalkias, K.; Cheng, E.; et al. The Libra Blockchain. Available online: https://developers.libra.org/docs/the-libra-blockchain-paper (accessed on 2 July 2020).
  28. Hyperledger Besu 1.5 Performance Enhancement. Available online: https://www.hyperledger.org/category/hyperledger-besu (accessed on 1 September 2020).
  29. Fan, K.; Sun, S.; Yan, Z.; Pan, Q.; Li, H.; Yang, Y. A blockchain-based clock synchronization Scheme in IoT. Future Gener. Comput. Syst. 2019, 101. [Google Scholar] [CrossRef]
  30. Bertasi, P.; Bonazza, M.; Moretti, N.; Peserico, E. PariSync: Clock synchronization in P2P networks. In Proceedings of the 2009 International Symposium on Precision Clock Synchronization for Measurement, Control and Communication, Brescia, Italy, 12–16 October 2009; pp. 1–6. [Google Scholar]
  31. Iwanicki, K.; van Steen, M.; Voulgaris, S. Gossip-Based Clock Synchronization for Large Decentralized Systems. Self-Manag. Netw. Syst. Serv. 2006, 3996. [Google Scholar] [CrossRef]
  32. Donet Donet, J.A.; Pérez-Solà, C. The Bitcoin P2P Network. In Proceedings of the Financial Cryptography and Data Security FC2014, Christ Church, Barbados, 7 March 2014; pp. 87–102. [Google Scholar]
  33. Yin, M.; Malkhi, D.; Reiter, M.K.; Gueta, G.G.; Abraham, I. HotStuff: BFT Consensus with Linearity and Responsiveness. In Proceedings of the 2019 ACM Symposium on Principles of Distributed Computing, Toronto, ON, Canada, 29 July–2 August 2019; pp. 347–356. [Google Scholar]
  34. Yin, M.; Malkhi, D.; Reiter, M.K.; Gueta, G.G.; Abraham, I. HotStuff: BFT Consensus in the Lens of Blockchain. Available online: https://arxiv.org/abs/1803.05069 (accessed on 30 May 2020).
Figure 1. The consensus process of the PBFT consensus algorithm.
Figure 2. Overview of the coordination technique for scalable BFT-based consensus.
Figure 3. Steps for processing collected transactions.
Figure 4. Elapsed time comparison between PBFT and PBFT equipped with the Consensus Coordinator (CC).
Figure 5. Comparing the amount of transactions between IBFT and CC + IBFT when the transaction generation interval is 10 ms.
Figure 6. Comparing the amount of transactions between IBFT and CC + IBFT when the transaction generation interval is 25 ms.
Table 1. The average elapsed time per transaction and the proportion of trouble transactions ( τ ).
N(Node) | 15 ms | 10 ms | 5 ms | 2 ms | 1 ms
16 | 0.099 | 0.120 | 0.244 | 0.330 | 0.406
28 | 0.186 | 0.253 | 0.316 | 0.405 | 0.521
40 | 0.232 | 0.438 | 0.533 | 0.753 | 0.835
52 | 0.462 | 0.813 | 1.100 | 1.308 | 1.672
64 | 0.714 | 1.000 | 1.355 | 1.915 | 2.511
76 | 0.713 | 1.805 | 2.106 | 2.781 | 3.155
Color legend: τ = 0.1x | τ = 0.2x | τ = 0.3x | τ = 0.4x | τ > 0.5x
Table 2. Result of monitoring equal and unequal transactions of consensus coordinator.
N(Node) | Equal Txs (10 ms) | Unequal Txs (10 ms) | Equal Txs (25 ms) | Unequal Txs (25 ms)
4 | 28 (93.33%) | 2 (6.67%) | 29 (96.67%) | 1 (3.33%)
8 | 27 (90%) | 3 (10%) | 30 (100%) | 0 (0%)
12 | 25 (83.33%) | 5 (16.67%) | 29 (96.67%) | 1 (3.33%)
16 | 20 (66.67%) | 10 (33.33%) | 26 (86.67%) | 4 (13.33%)
20 | 19 (63.33%) | 11 (36.67%) | 23 (76.67%) | 7 (23.33%)
24 | 16 (53.33%) | 14 (46.67%) | 20 (66.67%) | 10 (33.33%)
28 | 14 (46.67%) | 16 (53.33%) | 16 (53.33%) | 14 (46.67%)
32 | 10 (33.33%) | 20 (66.67%) | 17 (56.67%) | 13 (43.33%)
36 | 4 (13.33%) | 26 (86.67%) | 14 (46.67%) | 16 (53.33%)
40 | 3 (10.00%) | 27 (90.00%) | 12 (40%) | 18 (60%)
Total (avg. %) | 166 (55.33%) | 134 (44.67%) | 216 (72%) | 84 (28%)
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Seo, J.; Ko, D.; Kim, S.; Park, S. A Coordination Technique for Improving Scalability of Byzantine Fault-Tolerant Consensus. Appl. Sci. 2020, 10, 7609. https://doi.org/10.3390/app10217609

