Article

Deletion-Based Tangle Architecture for Edge Computing

by Khikmatullo Tulkinbekov and Deok-Hwan Kim *
Electrical and Computer Engineering Department, INHA University, Incheon 22212, Korea
* Author to whom correspondence should be addressed.
Electronics 2022, 11(21), 3488; https://doi.org/10.3390/electronics11213488
Submission received: 30 September 2022 / Revised: 18 October 2022 / Accepted: 25 October 2022 / Published: 27 October 2022
(This article belongs to the Special Issue Real-Time Control of Embedded Systems)

Abstract

IOTA Tangle offers a promising approach to distributed ledger technology with the capability to compete with traditional blockchains. To enable microtransactions in the Internet of Things (IoT) environment, IOTA employs a directed acyclic graph that ensures the integrity and immutability of transactions. However, IoT data exhibit time sensitivity, wherein the value is lost after a period. Storing such temporary data in immutable storage is not affordable in a distributed ledger. This study proposes a novel approach—referred to as D-Tangle—that enables data deletions in the Tangle architecture. To achieve this goal, D-Tangle divides transactions into three categories based on their expiration features and employs a climb-up writing technique. Extensive evaluations show that D-Tangle enables instant deletions of finite-lifetime data. Moreover, immutability and deletion upon request are guaranteed for unknown-lifetime data.

1. Introduction

In recent years, blockchain technology has proven its effectiveness in terms of data security and integrity [1,2]. The significant success of cryptocurrencies is a prime example of blockchain in a real-life application [3]. Blockchain offers a simple protocol that enables sharing the same data among all nodes in the network. Whenever new data are written, the old state is extended with a new block. Therefore, whoever possesses the longest chain of blocks holds the latest version of the database. By following this simple rule, all nodes obtain a full copy of the entire blockchain [4]. If attackers try to alter some data in the blockchain, they also need to share the altered version with other nodes. As the network already has the original version, altered copies can be easily rejected [5]. Moreover, most blockchains use key-value stores [6] as the main storage to avoid database attacks. For these reasons, blockchain has been at the center of researchers’ interest for more than a decade. Following intensive studies, blockchain technology has successfully emerged in numerous fields, such as supply chain management [7,8,9], data privacy control over cloud computing [10,11], healthcare systems [12,13,14], and edge computing [15,16,17]. Although the integration of blockchain has benefited several fields, a significant technology mismatch—preventing the attainment of its full advantage—prevails. Large data processing systems, such as edge computing, cloud computing, and data collection frameworks, usually manage unpredictable data, the majority of which are short-lived and redundant [18]. Storing these data in an immutable blockchain would cause efficiency problems. Therefore, the blockchain protocol must be altered to enable data deletions [19] and fulfill big data requirements. However, enabling data deletions from a blockchain seriously violates its most basic rules, which stand as the backbone of its security.
Therefore, a large research gap exists in enabling blockchain for big data processing. To address this issue, researchers have already introduced several state-of-the-art projects enabling data deletions from blockchains [20,21,22]. Based on observations of these projects, blockchain data deletion techniques can be categorized into block-wise and selective deletions.
A traditional blockchain uses blocks as basic units to store a set of transactions. The block contents are usually inseparable, owing to the hashing methods employed, so separating an arbitrary transaction from its block is difficult. However, deleting data block-wise is easier: the whole block, including all its transactions, is deleted from the database. The main challenge of this approach is therefore carefully placing transactions into blocks according to lifetime ordering. Moreover, blocks are saved in a linked order, wherein deleting a random block from the middle of the chain breaks the blockchain rule. Therefore, the easiest approach is locating the blocks in order of lifetime: the first-to-delete block is saved last. As an alternative, researchers have been working on selective deletion techniques, in which transactions have independent deletion rights while the block is preserved. To make this possible, the blockchain protocol must be designed in a way that enables recalculation of the block hash while maintaining the order. Block deletion is simple to implement, whereas selective deletion is more effective. Considerable research has been conducted to enable data deletions on the blockchain using both approaches [23,24,25]. Although these projects have demonstrated promising results, their limitations depend on the workload, and the IoT data environment is dynamic, with data exhibiting different types of behavior. The proposed systems face difficulties when employing deletion methods because they all share the same blockchain protocol. The blockchain data write procedure focuses on making the complicated data structure impossible to modify. Therefore, the purpose of the employed technique contradicts the main focus of blockchain, thereby challenging researchers.
Therefore, deletion methods in distributed systems must be examined further with alternative approaches to make them suitable for the IoT environment.
In addition to traditional approaches such as Bitcoin [4] and Ethereum [26], numerous other alternatives that share general blockchain rules have also been introduced. IOTA [27] is among the most promising alternatives that enable microtransactions. IOTA introduces a directed acyclic graph (DAG)-based architecture [28] named Tangle, which saves transactions independently instead of employing blocks. Further, Tangle allows forest-like structures with multiple connections to the leaf nodes. Each transaction is expected to verify previous transactions. This prevents heavy computations when creating blocks and speeds up the verification process for each transaction. More importantly, each transaction is processed individually, making it easier to separate from the ledger. The advantages of IOTA motivated this research to introduce a novel and simple solution for enabling data deletions in immutable distributed storage. This study proposes the D-Tangle architecture, which is capable of instant deletions of finite-lifetime data. D-Tangle also considers unknown-lifetime and immutable data in a dynamic environment. In these cases, deletion is guaranteed on request, and immutability is provided when the data behavior changes. D-Tangle inherits its main characteristics from IOTA and enables data deletions on leaf transactions only. D-Tangle thereby achieves the benefits of both deletion techniques employed in the blockchain: in a DAG, each node stands as a separate unit, like a block, and deleting a node deletes a single datum, so D-Tangle achieves both simple and selective deletions. To enable instant deletions, a climb-up write technique is employed. In the new method, a transaction is allowed to verify an old transaction only if the old transaction’s lifetime is longer than its own. Short-lifetime transactions are invariably written last, making it possible to delete them after expiration.
Following the implementation of the climb-up write technique in the D-Tangle architecture, we make the following three main contributions:
  • Instant deletion of finite-lifetime data;
  • Guaranteed deletion of unknown-lifetime data upon delete request;
  • Improved storage utilization when managing finite-lifetime data.
This paper is organized as follows: Section 2 discusses related work. Section 3 lists the most relevant background projects and motivations for this research. Section 4 comprehensively discusses D-Tangle architecture. Section 5 proves the effectiveness of the proposed architecture through an extensive evaluation. Section 6 concludes the paper.

2. Related Work

As mentioned above, blockchain-based data deletion has become an interesting research topic in recent years. Pyoung et al. [29] introduced Littichain for finite-lifetime data with the capability of block deletion, employing an expire-time ordering technique to enable deletions of expired blocks. To maintain blockchain consensus, arrival-time ordering is also preserved with the desired configuration. Although Littichain achieves promising results, its capability is limited by block dependencies, meaning that changing the workload to a more dynamic type creates unexpected delays in data deletion. Hillmann et al. [30] introduced another interesting study with selective deletions, addressing the independent expiration time of each data item inside the blocks. As the block data are linked using a complex hashing technique, separating single data items by expiration time would be impossible. Instead, they employed a summary-block technique that creates a new block that skips the expired data from older blocks and replaces them. Using the selective deletion method, they found an easy approach to a complex problem but still could not avoid delays in deletion: summary blocks are created only at specified intervals, and data that expire between intervals must wait until the process is triggered. As an alternative approach, Khanboubi et al. [20] introduced a data-deletion technique for blockchain-based data deduplication systems. Yang et al. [23] moved data deletion rights to the public in cloud data management using blockchain. Although they did not directly delete data from the blockchain, they proved its role in implementing new features in big data management.
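Our reading of the summary-block idea [30] can be sketched minimally as follows (the block structure and field names here are hypothetical illustrations, not the authors' actual format): a new block is recreated that keeps only the still-live entries of the old blocks and re-hashes the result.

```python
import hashlib
import json

def summary_block(old_blocks, now):
    """Recreate one block keeping only the still-live entries of `old_blocks`.
    Each entry is a (expire_time, payload) pair; expired entries are skipped
    and the surviving set is re-hashed into a fresh block."""
    live = [(e, p) for block in old_blocks
            for (e, p) in block["entries"] if e > now]
    body = json.dumps(live)
    return {"entries": live,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

blocks = [{"entries": [(10, "a"), (99, "b")]},
          {"entries": [(20, "c"), (80, "d")]}]
s = summary_block(blocks, now=30)
assert [p for _, p in s["entries"]] == ["b", "d"]  # "a" and "c" expired
```

Because the summary block replaces the old blocks wholesale, single entries never have to be surgically removed from a hashed block — which is exactly why the deletions are delayed until the summary process is triggered.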
As an alternative to blockchain, Guo et al. [31] conducted the first comprehensive analysis of the IOTA architecture using real-life data and categorized its performance by data type. Bu et al. [32] improved the IOTA architecture by focusing on fairness in verification and improving confidence. Bhandary et al. [33] and Shabandri et al. [34] elucidated the security factors in detail and improved IOTA’s reliability by introducing new security techniques. Gangwani et al. [35] employed secure transmission in mobile messaging protocols using IOTA. These state-of-the-art projects have proven the effectiveness of IOTA in the IoT data environment and its advantages over traditional blockchain protocols. However, they do not address data deletion in the Tangle architecture.
Observations from related studies prove that data deletion has already become an emerging topic in distributed storage systems. Because it is decentralized and free of central control, blockchain has thus far been the main focus of this research. However, numerous limitations in the blockchain prevent deletion-based protocols. The IOTA Tangle, by contrast, provides a more straightforward approach to processing IoT data. Although data deletion has not yet been considered seriously in the Tangle architecture, its advantages in data processing can benefit from new features. Therefore, this study takes a first step by adapting research from blockchain to DAG-based distributed systems.

3. Background and Motivations

Distributed computing has always been at the center of interest owing to its transparency. Allowing a public network to achieve consensus for data verification prevents third-party participation in numerous real-life applications such as product management, money transactions, and authentication. However, managing distributed nodes is a significant challenge in enabling such computations. A significant breakthrough has occurred in this field in recent years following the introduction of blockchains and directed acyclic graphs.

3.1. Blockchain

Among distributed computing systems, blockchain has become the most popular protocol, with endless possibilities and varied applications. Blockchain enables a public network wherein nodes follow a set of rules to achieve a consensus. Figure 1 presents the general overview of the blockchain architecture.
Traditionally, the main data units are cryptocurrency transactions [36,37,38,39]. As transactions are created in large volumes and the blockchain relies on public consensus over the network, processing each transaction individually is challenging. Instead, the blockchain groups several transactions into blocks and creates a linked chain of blocks. The blocks in the chain are ordered by creation time, and the latest block is written last. For security, blockchain relies on a P2P network [40] of nodes, as presented in the figure. All nodes share the same copy of the blockchain. Maintaining the security and reliability of heterogeneous and dynamic nodes is a complicated process that creates delays between the creation of new blocks. As transactions are created continuously, those created between two blocks are written into a data structure referred to as the pending-transactions list. Depending on system requirements, blockchain architectures employ different mechanisms to create blocks. The most famous blockchains, such as Bitcoin and Ethereum, rely on a process termed mining. When a new block is due, the nodes read transactions from the pending-transactions list and try to solve a heavy mathematical puzzle to calculate the block hash. The puzzle is designed to force nodes to calculate a fixed ordering of the transactions and a unique hash for the block. In Bitcoin, a node may take about 10 min to solve the puzzle. The node that solves the puzzle first shares the result with the whole network and waits for confirmations. When other nodes receive the broadcast message, they check and verify the result and confirm whether the solution is true. After receiving confirmations from a majority of nodes, the owner node obtains the right to add the new block to the chain. When a new block is added, the new version of the chain is rebroadcast to the network.
For integrity, nodes follow the longest-chain rule [4], wherein a chain is accepted only if it is longer than previous versions. If two or more nodes broadcast a new block simultaneously, the one with the longer chain wins the consensus and is verified by the network. In return for the hard work and contribution to the blockchain, the owner node receives an incentive in the form of cryptocurrency (Bitcoin). Moreover, all transaction fees from the new block are distributed to network nodes as incentives for confirmation. As blockchain consensus is based on heavy computations and verifications, the entire process is referred to as proof of work (PoW). Any arbitrary computer or server is allowed to join the public blockchain network, as long as it follows the PoW consensus. If malicious nodes try to alter the consensus rules or transaction behavior, the updates are easily rejected by the network.
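The hash-puzzle mining and chain linkage described above can be sketched minimally as follows (the block fields, JSON-encoded header, and hex-prefix difficulty target are simplifications for illustration, not Bitcoin's actual block format):

```python
import hashlib
import json

def mine_block(transactions, prev_hash, difficulty=2):
    """Search for a nonce so that the block hash starts with
    `difficulty` zero hex digits (a toy proof-of-work puzzle)."""
    nonce = 0
    while True:
        header = json.dumps({"prev": prev_hash,
                             "txs": transactions,
                             "nonce": nonce})
        block_hash = hashlib.sha256(header.encode()).hexdigest()
        if block_hash.startswith("0" * difficulty):
            return {"prev": prev_hash, "txs": transactions,
                    "nonce": nonce, "hash": block_hash}
        nonce += 1

# Each block commits to its predecessor's hash, forming the chain.
genesis = mine_block(["genesis"], "0" * 64)
block1 = mine_block(["alice->bob:5"], genesis["hash"])
assert block1["prev"] == genesis["hash"]
```

Altering any transaction in an earlier block changes its hash, invalidating every later `prev` link, which is why altered copies are easily rejected under the longest-chain rule.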

3.2. Directed Acyclic Graph

As blockchain consensus is based on a mining process with heavy computations, miners are attracted to the network mainly to gain rewards. Meanwhile, transaction fees and the demand for cryptocurrencies increase significantly. This makes the most popular networks unaffordable for small transactions, wherein the fee can exceed the amount being sent. Moreover, blockchain transactions are highly dependent on block creation; Bitcoin, for example, can only perform up to 10 transactions per second. Considering the current demand for the technology, most users do not tolerate high fees at low speeds. To address these issues, IOTA [27] was introduced as an alternative distributed network. IOTA is not a blockchain but is based on a directed acyclic graph (DAG) [28]. A DAG is a type of forest structure that is used as a storage system. As the name suggests, each edge has a certain direction (directed), and loops are not allowed among nodes (acyclic). IOTA introduces a DAG-based structure termed the Tangle. Compared with blockchain, IOTA does not mine blocks to confirm transactions but processes each transaction individually. In Tangle, each transaction is considered a node of the graph, and the edges specify the transaction verifications. Tangle employs two simple rules to achieve consensus.
  • Each new transaction must verify two old transactions.
  • Transactions are confirmed after getting verifications.
Following these rules, Figure 2a presents a sample Tangle graph. At the beginning, only the genesis transaction exists, which serves as the first node. Following rule 1, each transaction must verify two old transactions; otherwise, it will not be confirmed by the network. According to rule 2, transactions usually have only two confirmations; however, occasionally, more confirmations can arise. In a blockchain, users usually wait for five to six confirmation blocks to ensure the security of a transaction. In the case of Tangle, each transaction is assigned a certain number termed its weight (w). Whenever a new transaction is saved, it affects the cumulative weight of the old transactions it verifies and their parent transactions. As presented in the figure, all transactions have the same own weight of 1. Once a new transaction verifies an old transaction, the old transaction’s cumulative weight is updated by adding the new transaction’s weight. Thus, the weight increases as the tree rises toward the genesis. The higher the cumulative weight, the more secure the transaction. This simple architectural technique makes the IOTA consensus straightforward and free of charge, and this simplicity and freedom from transaction fees enable microtransactions.
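The cumulative-weight rule can be illustrated with a short sketch (the transaction names and the adjacency representation are hypothetical): with every transaction's own weight fixed at 1, a transaction's cumulative weight counts itself plus every transaction that directly or indirectly verifies it.

```python
def cumulative_weight(tx, approvers):
    """`approvers` maps a transaction to the set of transactions that
    directly verify it. Cumulative weight = own weight (1) plus the
    number of distinct direct and indirect approvers."""
    seen = set()
    stack = [tx]
    while stack:
        cur = stack.pop()
        for a in approvers.get(cur, ()):
            if a not in seen:
                seen.add(a)
                stack.append(a)
    return 1 + len(seen)

approvers = {
    "genesis": {"t1", "t2"},  # t1 and t2 both verify genesis
    "t1": {"t3"},             # t3 verifies t1 ...
    "t2": {"t3"},             # ... and t2
}
assert cumulative_weight("t3", approvers) == 1       # a fresh tip
assert cumulative_weight("t1", approvers) == 2       # itself + t3
assert cumulative_weight("genesis", approvers) == 4  # itself + t1, t2, t3
```

The traversal deduplicates approvers, so a transaction reachable along several paths (as t3 is here) is counted once, matching the rise of weight toward the genesis.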
Regarding blockchain, people are motivated to join the network for rewards from transaction fees and block-mining incentives. However, IOTA is free of charge and does not mine blocks. This raises the question of the incentive mechanism for the network nodes. IOTA Tangle motivates nodes in two ways:
  • By joining the IOTA network, everyone can verify and write transactions, instead of relying on other nodes.
  • IOTA offers storage-utilized membership, whereby nodes are not forced to copy the full database.
Figure 2b presents an example of the IOTA network. Two types of nodes exist in the network: coordinators and ordinary nodes. The coordinator is operated by the company itself and serves as the main node that controls the Tangle’s integrity. Whenever a new transaction arrives, the coordinator executes an algorithm to find the most suitable transactions for verification. Employing a coordinator node prevents infinite verification of certain transactions and delayed confirmations. This ensures that the performance of the system remains stable and reliable. Moreover, anyone can join the network as an ordinary node. When nodes inherit the Tangle database, they can copy either the entire or a partial Tangle. When a partial Tangle is copied, only the necessary part is copied, which still enables the writing of new transactions, as presented in the figure.
Overall, IOTA Tangle makes another breakthrough in distributed computing by enabling free-of-charge transactions. Moreover, Tangle has a highly scalable architecture wherein performance increases as the network and the number of transactions grow: with more transactions to be written, old transactions are verified faster. This limitless performance capability, alongside storage-utilized consensus, makes IOTA among the most competitive distributed networks.

3.3. Data Deletions

Distributed computing has also emerged in IoT data-processing systems—for example, edge computing and data-collection frameworks. However, integration faces problems owing to certain technology mismatches. IoT data are unpredictable and massively generated by connected devices. Typically, the same data exist in multiple versions that require updates. Additionally, much IoT data quickly become useless and must be continuously deleted. However, distributed computing requires maintaining an immutable database when working with dynamic nodes. Neither blockchain nor DAG protocols allow nodes to make modifications or deletions once the database is shared with the public. Saving massive IoT data in an immutable, distributed database would be costly in terms of storage efficiency. Moreover, the application possibilities are limited. Therefore, data deletion in distributed computing is the next step in research to enable integration with IoT data processing.
Considerable advancement has occurred in adapting the blockchain architecture to enable data modifications. To achieve this, Littichain [29] proposed a block-wise deletion method that eliminates blocks from the end of the chain. As the blockchain stores blocks in linked order, deleting a random block is impossible and strongly against blockchain rules. However, a block can be deleted if no child block refers to it. This means that writing the blocks in deletion order enables block deletions while following the blockchain protocol. Littichain accomplishes this goal by employing the following two types of ordering in the block structure:
  • Arrival time ordering maintains the physical order of the blocks and follows the blockchain protocol.
  • Expiration time ordering locates the blocks by expiration time and enables deletion from the end of the chain.
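The block-wise deletion idea can be sketched as follows (a simplified reading with hypothetical names; only the expiration ordering is modeled here, whereas Littichain additionally preserves arrival-time ordering, which this sketch omits): blocks are kept so that expiration times are non-increasing toward the tail, so expired blocks can only appear at the end of the chain and can be pruned without breaking any link.

```python
import bisect

class ExpireOrderedChain:
    """Blocks sorted by descending expiration time, so the
    first-to-expire block is always the last (tail) block."""
    def __init__(self):
        self.blocks = []  # list of (expire_time, payload) pairs

    def insert(self, expire_time, payload):
        # Bisect on negated keys to keep descending order by expiration.
        keys = [-e for e, _ in self.blocks]
        pos = bisect.bisect_right(keys, -expire_time)
        self.blocks.insert(pos, (expire_time, payload))

    def prune(self, now):
        """Delete expired blocks from the tail only, never from the middle."""
        removed = []
        while self.blocks and self.blocks[-1][0] <= now:
            removed.append(self.blocks.pop())
        return removed

chain = ExpireOrderedChain()
for e, p in [(50, "a"), (90, "b"), (70, "c")]:
    chain.insert(e, p)
assert [e for e, _ in chain.blocks] == [90, 70, 50]
assert chain.prune(now=60) == [(50, "a")]  # only the tail block is removed
```

The prune step stops at the first unexpired block, so a still-live block in the middle is never bypassed, mirroring the "no child block refers to it" condition.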
Moreover, several other studies have been introduced to enable data deletions in blockchains in numerous ways. However, most proposed systems suffer from performance and storage utilization issues regarding instant deletions. As the blockchain greatly depends on the network confirmations for the block creations, deletion also faces the same issue, thereby precipitating unwanted delays.

3.4. Motivations

As mentioned above, distributed computing protocols must be adapted to the IoT data environment with the possibility of data deletion. Although a considerable breakthrough in employing data deletion techniques in blockchain architectures has occurred, instant deletions are not yet possible. By contrast, IOTA Tangle offers an outstanding alternative to the blockchain as a distributed storage system. Following its simple consensus rule, Tangle achieves an almost limitless capability for processing data. These observations motivated this study to enable data deletion in the Tangle architecture. As Tangle stores each transaction separately instead of grouping them into blocks, it creates a great opportunity for employing selective deletions. In other words, employing deletions in the Tangle protocol eliminates the need for preprocessing to place data into blocks by expiration time. Moreover, Tangle allows an unlimited number of leaf nodes: several unverified tip transactions can coexist without conflicting with the consensus protocol. Therefore, Tangle allows the data to be located by expiration time, as desired for the system. This study proposes a deletion-based Tangle (D-Tangle) architecture that realizes all the aforementioned motivations.

4. D-Tangle Architecture

Similar to IOTA, D-Tangle relies on a coordinator node to maintain the consensus rules. The coordinator determines the most suitable transactions to verify each new transaction. This ensures that written transactions can be verified as soon as possible, allows the network to follow the integrity rules, and avoids clustering of transaction verifications. Based on the D-Tangle consensus, the lifetime of the data is handled in the following steps:
  • New data arrives with a specific expiration time.
  • Based on the expiration time, the coordinator node saves the data using the climb-up write technique.
  • While writing the new transaction, the coordinator node checks the lifetimes of old transactions and performs deletion on expired ones.
The climb-up write technique plays a crucial role in the D-Tangle consensus, and some challenges need to be considered for its implementation. First, the dynamic IoT data behavior needs to be studied carefully. Even though deletion is a necessary feature in distributed storage, there are still valuable data that need to be immutable. Moreover, the lifetime cannot be determined in advance for all data. Thus, the employed technique must address the needs of the actual workload. Second, storage utilization is crucial in a distributed network, and keeping expired data for a long time increases the maintenance cost. Employing instant deletions avoids this unwanted extra cost. The rest of this section discusses the D-Tangle architecture in detail and how it solves the above-mentioned challenges.

4.1. Data Deletions

Each node in the D-Tangle architecture represents independent data or transactions saved in a distributed network. Therefore, here, the terms “node”, “data”, and “transaction” are used interchangeably. Addressing the first challenge related to data expiration behavior, D-Tangle categorizes the transactions into the following three types:
  • Finite—the nodes whose expiration time is provided upon insertion.
  • Immutable—the nodes with no expiration time that need to be saved forever.
  • Unknown—the nodes with an unspecified expiration time that may or may not expire.
Finite and immutable types are easy to process because their expiration can be handled with simple arithmetic. However, the unknown type is the most challenging to implement. As IoT data are unpredictable and dynamic, defining the expiration time in advance is not always possible. There is a possibility that such data are later either saved forever or deleted at some point in the future. The expiration time and other constraints render the unknown type of data unpredictable; therefore, instant deletions cannot be guaranteed while maintaining the distributed system consensus. However, a reliable system must consider both cases and enable the possibility of both immutability and deletion. As deletion is closely connected to how nodes are located while writing them, an optimal write technique is also important. To achieve this goal, D-Tangle employs a climb-up write technique that fully satisfies the deletion requirements while maintaining the consensus rules. Employing all these features, Figure 3 presents the three possible cases of the climb-up write technique.
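One way to model the three categories is to map each node to a composite sort key so that immutable nodes rank above unknown ones, and unknown ones above any finite expiration; the class and field names below are our own illustrative choices, not prescribed by the architecture.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    name: str
    kind: str                       # "finite" | "immutable" | "unknown"
    expire: Optional[float] = None  # set only for finite nodes

def depth_key(node):
    """Higher key = written closer to genesis (verified earlier).
    Immutable > unknown > finite; finite nodes rank by expiration time."""
    if node.kind == "immutable":
        return (2, 0.0)
    if node.kind == "unknown":
        return (1, 0.0)
    return (0, node.expire)

nodes = [Node("a", "finite", 40), Node("b", "immutable"),
         Node("c", "unknown"), Node("d", "finite", 90)]
ordered = sorted(nodes, key=depth_key, reverse=True)
assert [n.name for n in ordered] == ["b", "c", "d", "a"]
```

The shortest-lived finite node ("a") ends up last, i.e., at the leaves of the graph, which is exactly the position from which D-Tangle can delete it instantly upon expiration.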
Figure 3a presents the initial state with an example D-Tangle graph. The graph starts with the genesis node, as in the IOTA architecture. The write technique is optimized such that immutable nodes are automatically written immediately after the genesis node in creation-time order. After the immutable nodes, the unknown-expiration nodes are saved. When verifying these nodes, D-Tangle strictly avoids verifying another unknown type. Unknown nodes may or may not be deleted; if a parent node needed to be deleted while its child was stalled, it would cause an indefinite delay that prevents performing the delete operation. Following the unknown types, finite nodes are saved ordered by expiration time. In this manner, D-Tangle achieves instant deletions of finite nodes. If a deletion is required on an unknown node, it must wait for its children to expire before the deletion can be performed. Although a delay on unknown nodes is inevitable, D-Tangle guarantees that the operation is performed eventually. In the graph, some nodes may have only one child but are still noted as verified, which may appear to violate the Tangle rule. However, the node’s child has verifications from other nodes; hence, all paths contribute to the cumulative weight, and such nodes are verified automatically. In the ideal case, the new node has a shorter expiration time than the leaf nodes and verifies only them (see Figure 3a).
However, the IoT data environment exhibits several complications. Figure 3b–d present the finite, immutable, and unknown node insertion cases using the climb-up write technique. According to the traditional method, a new node would verify one of the unverified nodes. However, the new node has an expiration time of 50 (see Figure 3b), which is greater than that of all unverified nodes. Therefore, it climbs up and verifies the parent nodes with expiration times of 55 and 50. In the case of an immutable node (see Figure 3c), all leaf nodes have a finite expiration time, and the new node climbs up to the parent nodes. Moreover, as immutable nodes can only verify other immutable nodes, the technique climbs up to the desired location. Finally, for the unknown node (see Figure 3d), one of the parent nodes is found in the unverified list, as the last inserted node is immutable. However, for the second parent, the technique climbs to a higher location, finding another immutable node.

4.2. Climb-up Write Technique

This section discusses climb-up write-technique implementation and its role in addressing the storage utilization challenge. To achieve integrity and maintain the original Tangle performance, the climb-up write technique follows three rules.
  • Unverified nodes have the highest priority according to creation order.
  • Expired nodes should be eliminated from Tangle.
  • New nodes verify an old node only if it has a longer expiration time.
Algorithm 1 presents the procedure for the climb-up write technique following the specified rules. The procedure takes the current state of the Tangle and the new node (data) as input parameters (lines 1 and 2). After execution, the new state of the Tangle is returned as the output (line 3). Following the first rule, the procedure begins by obtaining the unverified nodes, as presented in line 7. The rule also states that priority is set according to creation time, meaning that the first-created node must be verified first if the other requirements are met. For this purpose, the list is sorted by creation time (timeA), as presented in line 8. After loading all variables, the unverified list is iterated to check the other requirements, as presented in lines 11 to 20. First, the expiration time is examined to determine whether the node’s lifetime has already expired (line 13). Expired nodes are eliminated from the Tangle (line 14), following the second rule. When the eliminate function is called, the node’s parents are examined for cumulative weight and verification constraints, and nodes that become childless are reintegrated into the unverified list. The implementation of the third rule is shown in lines 15–20: the condition checks whether the new data’s expiration time is smaller than the candidate’s, and if so, the verification is confirmed. In this manner, the procedure iterates over all unverified nodes to check for verification and expiration. The remainder of the procedure runs only if the unverified nodes do not satisfy the requirements, which occurs whenever the new node has a longer expiration time than all unverified nodes (line 22). In this case, the procedure climbs up to the parent nodes, looking for a suitable candidate according to the expiration time. In this step, the nodes need not be checked for expiration because deletion occurs only among the leaf nodes.
In other words, the parent nodes are guaranteed to have a longer expiration time; therefore, there is no need to confirm. Moreover, the parent nodes are already verified by leaf nodes. Therefore, there is no need to examine the cumulative weights. The procedure searches for the upper parents and finds a suitable node for verification (lines 24 and 25). When a node is found, the pointer is set to the candidate node and continues. In conclusion, new data are added to the unverified list after obtaining the desired pointers (line 31).
Algorithm 1. Climb-up write technique

1:  Input:  tangle → current state of Tangle
2:          data → new Data to write
3:  Output: out → new Tangle including new Data
4:
5:  procedure
6:
7:      list ← tangle.getUnverifiedList()
8:      list.sort(item => item.timeA, ASC)
9:      tD ← data.timeE
10:
11:     for int i = 0; i < list.size(); i++ do
12:         tN ← list.get(i).timeE
13:         if tN <= CURRENT then
14:             tangle.remove(list.get(i))
15:         if tD < tN then
16:             if data.parent1 == NULL then
17:                 data.parent1 = list.get(i)
18:             else
19:                 data.parent2 = list.get(i)
20:             continue
21:
22:     while data.parent1 == NULL || data.parent2 == NULL do
23:         node ← list.next()
24:         while tD > node.parent.timeE do
25:             node ← node.parent
26:         if data.parent1 == NULL then
27:             data.parent1 = node
28:         else
29:             data.parent2 = node
30:
31:     tangle.getUnverifiedList().append(data)
32:     return tangle as out
As stated in line 31, the new write technique also updates the list of unverified data after assigning the desired parents. This approach enables D-Tangle to achieve data deletions easily while verifying new transactions without extra computational cost.
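For concreteness, the procedure can be sketched in runnable form. The following Python sketch is a free interpretation of Algorithm 1 under simplifying assumptions: the `Node`, `Tangle`, `time_a`, and `time_e` names are illustrative, the cumulative-weight checks and childless-node reintegration are omitted, and the ancestor climb of lines 22-29 is reduced to walking up `parent1` until an ancestor outlives the new data.

```python
class Node:
    """Simplified D-Tangle transaction node (illustrative field names)."""
    def __init__(self, time_a, time_e, parent1=None, parent2=None):
        self.time_a = time_a      # creation (arrival) time
        self.time_e = time_e      # expiration time; float('inf') = immutable
        self.parent1 = parent1    # first verified parent
        self.parent2 = parent2    # second verified parent

class Tangle:
    def __init__(self):
        self.unverified = []      # leaf (tip) nodes awaiting verification

    def climb_up_write(self, data, now):
        """Attach `data` to two parents following the climb-up rules."""
        # Rule 1: consider unverified tips oldest-first (sort by creation time).
        tips = sorted(self.unverified, key=lambda n: n.time_a)
        for node in tips:
            if node.time_e <= now:           # Rule 2: drop expired leaves.
                self.unverified.remove(node)
            elif data.time_e < node.time_e:  # Rule 3: only longer-lived nodes qualify.
                if data.parent1 is None:
                    data.parent1 = node
                elif data.parent2 is None:
                    data.parent2 = node
        # Climb-up step: every remaining tip is shorter-lived than the new
        # data, so walk up to ancestors that outlive it (parents are
        # guaranteed to live at least as long as their children).
        for node in sorted(self.unverified, key=lambda n: n.time_a):
            if data.parent1 is not None and data.parent2 is not None:
                break
            while node.parent1 is not None and node.time_e <= data.time_e:
                node = node.parent1
            if data.parent1 is None:
                data.parent1 = node
            elif data.parent2 is None:
                data.parent2 = node
        self.unverified.append(data)         # line 31: new data becomes a tip
        return self
```

A short-lived write attaches directly to longer-lived tips; a write that outlives all tips climbs toward the (immutable) root, mirroring the two phases of the algorithm.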

4.3. Theoretical Analysis

This section presents a detailed theoretical analysis of the D-Tangle architecture based on three factors:
  • Write efficiency. The climb-up write technique may seem to slow the traditional write mechanism because of the additional deletion checks. However, this does not considerably affect performance. In a distributed system, delay is predominantly caused by network communication and waiting for responses during propagation. In D-Tangle, the climb-up write technique operates only on the coordinator node and adds only a few extra checks compared with the traditional logic. Delete operations are confirmed by consensus, similarly to writing new data. Therefore, the write mechanism incurs no considerable delay and maintains the traditional verification performance.
  • Delete efficiency. Deletions fall into two categories: finite-data and unknown-data deletions. Under the climb-up write technique, finite-lifetime data are automatically ordered such that the nodes to be deleted first are invariably kept as leaves; therefore, deletions of finite data can be performed instantly. However, unknown data can be validated by finite data, irrespective of the expiration time. If a delete request occurs, the unknown data must wait until the child node expires. Precisely estimating the possible delay for unknown-lifetime data is not feasible, but it can be bounded depending on the workload configuration. For workloads containing no unknown data, or consisting entirely of unknown data, the latency is expected to decrease because unknown data do not validate each other. Therefore, the worst performance occurs in mixed (half finite, half unknown) workloads, wherein validation occurs for each unknown datum. In this case, the latency depends fully on the expiration times of the finite data.
  • Storage efficiency. Another important factor for distributed storage reliability is efficiency. Remarkably, the traditional Tangle is an efficiency-friendly architecture that implements partial copies, and D-Tangle should preserve this property. When deletions are enabled, the meaning of storage efficiency changes: it then refers to how quickly used storage can be freed after data deletions are requested. In the unpredictable case of unknown-lifetime data, storage may be allocated slightly longer than expected during deletions. However, D-Tangle eventually guarantees deletion, so the storage is eventually freed. Considering that the traditional approach allocates storage forever, freeing it after a delay is far more efficient in real-life applications.
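To make the delete-efficiency argument concrete, the following Python sketch models the on-request deletion delay of an unknown-lifetime node. The function name and the model itself are illustrative assumptions, not the paper's simulator: the delay is taken to be governed by the latest-expiring finite-lifetime child that verifies the node.

```python
def unknown_delete_delay(request_time, child_expirations):
    """Model the deletion delay for an unknown-lifetime node.

    An unknown-lifetime node can only be removed once every finite-lifetime
    child verifying it has expired, so the delay is governed by the child
    with the latest expiration time (0 if all children have already expired
    or no finite child verifies the node).
    """
    if not child_expirations:   # no finite children: deletion is instant
        return 0.0
    return max(0.0, max(child_expirations) - request_time)
```

This captures the bullet's two extremes: with no finite children the deletion is instant, while in mixed workloads the delay grows with the expiration times of the finite data.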

5. Evaluations

Extensive experiments were performed to evaluate the D-Tangle deletion performance in terms of the delay and storage cost for the claimed factors. This section discusses the simulation environment setup and experimental results comprehensively.

5.1. Environment Setup

Data deletions in the Tangle architecture are a new research feature, and traditional Tangle-based architectures do not provide them; therefore, evaluation on a real distributed network is impossible. The optimal approach is to establish a simulated network. To this end, nodes were provisioned with the hardware resources listed in Table 1. There are three types of nodes, as in the IOTA network. The coordinator node is deployed to a cloud server on an Amazon EC2 instance. Network nodes are divided into full and partial nodes: a full node copies the full database, whereas a partial node copies only the portion sufficient for verification. Full nodes are deployed on local servers and desktop computers with ordinary computing resources, and Jetson AGX Xavier [41] embedded boards are employed to simulate partial nodes. Using these servers and computers, a network of 100 nodes was simulated for the test environment.
With the simulation network running, the next challenge is preparing suitable workloads for evaluation. As the main claim of D-Tangle is enabling data deletions, the workloads are prepared accordingly. Immutable data are never deleted from the network, and their insertion is similar to that of the other types; therefore, the evaluation workloads focus only on the unknown and finite data types. Three workloads are prepared for the experiments, as presented in Figure 4. The first two write entries with linearly ordered deletion times, whereas Workload C writes data with random deletion times. The default configuration includes only finite data with predefined lifetimes. The proportion of unknown data is then increased linearly within the workload: the specified percentage of entries is inserted as unknown data, and the delete request is issued at the workload-defined time. This variable configuration helps observe D-Tangle's behavior in scenarios with different lifetimes.
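The workload layout described above can be sketched as follows. The generator name, parameters, and uniform arrival spacing are illustrative assumptions (the paper's exact generator is not published); what it reproduces is the structure of Figure 4: linear deletion orders for Workloads A and B, random deletion times for Workload C, and a configurable share of unknown-lifetime entries.

```python
import random

def make_workload(kind, n_entries, unknown_pct, horizon=1000, seed=42):
    """Generate (arrival_time, expiration_time) pairs for one workload.

    kind 'A': linearly decreasing deletion order (last written expires first);
    kind 'B': linearly increasing deletion order (first written expires first);
    kind 'C': random expiration times within `horizon`.
    unknown_pct: percentage of entries written with an unknown lifetime
    (expiration None; deletion is requested on demand later).
    """
    rng = random.Random(seed)
    workload = []
    for i in range(n_entries):
        arrival = i                              # uniform arrival spacing (assumption)
        if rng.random() < unknown_pct / 100.0:
            expiration = None                    # unknown-lifetime entry
        elif kind == 'A':
            expiration = 2 * n_entries - i       # expirations decrease with arrival order
        elif kind == 'B':
            expiration = n_entries + i           # expirations increase with arrival order
        else:                                    # 'C': random deletion times
            expiration = arrival + rng.randint(1, horizon)
        workload.append((arrival, expiration))
    return workload
```

Sweeping `unknown_pct` from 0 to 95 over each `kind` reproduces the experimental grid used in the evaluations.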

5.2. Evaluation Results

The average deletion latency was recorded for all workloads, as presented in Figure 5. Extensive evaluations were performed on each workload by increasing the unknown data percentage from 0% to 95%. The proportion of unknown data was distributed uniformly throughout the workload to ensure fairness in the results. The same scenario was repeated four times with different data sizes to further expose differences in behavior. The workloads were prepared using small-sized data, with each sample averaging 500 entries; the number of entries was configured to simulate total workloads of 256 GB, 512 GB, 1 TB, and 2 TB. Experiments with the same configuration were performed on all workloads, as well as on the B + A combination, for better conclusions. Figure 5a presents the results for Workload A. As the results confirm, D-Tangle achieves almost instant deletion performance on this workload. Workload A deletes data in decreasing order; thus, each subsequent item invariably verifies the leaf node. This shows that even as the unknown data proportion increases, the last-inserted data are invariably deleted first, avoiding any delay. Although the latency increases slightly, averaging 5.2 s across all cases, the results can be regarded as near-instant deletion performance. Therefore, Workload A can be considered the most favorable environment for the D-Tangle architecture for any combination of finite and unknown data. Figure 5b presents the evaluation results for Workload B. The results form a triangular shape, with the worst performance between 40% and 50% unknown data. As Workload B is the opposite of Workload A, longer latencies clearly arise in these evaluations. However, the latency decreases significantly as the proportion of unknown data grows toward 95%. Theoretically, the latency might be expected to increase linearly. However, the results are averaged over the entire workload: undoubtedly, a small proportion of the data has extremely long latency, but the overall average decreases at high unknown proportions. As the amount of unknown data increases, these data do not verify each other and instead verify only immutable data. Moreover, the chance that finite data verify all the unknown data shrinks as the finite proportion decreases. Therefore, deletions can be performed faster in most cases.
As Workloads A and B contradict each other, further evaluations on combinations of both help obtain a larger picture. The A + B combination would behave the same as Workload B itself; however, the B + A combination opens a broader overview of D-Tangle performance. Figure 5c presents the results of this evaluation. Compared with Workload B, the combined workload forces D-Tangle to verify the unknown data, forming a more parabolic curve, and the worst-case delay increases from 120 to 140 s. However, the general shape is maintained, with low latency at the minimum and maximum unknown data proportions. A similar shape with a smaller height can be noted for Workload C: the overall picture resembles the other results, but the worst delay is recorded at 100 s. From these results, the B + A combination presumably creates the worst case of a random workload with specific configurations.
In all the evaluations, the initial latency for every workload and data size is 0 when the unknown data proportion is 0%. This means that for workloads containing only immutable and finite-lifetime data, D-Tangle achieves perfect performance with instant deletions. Although the latency for unknown data reaches up to 120 s, all data are eventually deleted. D-Tangle thus achieves another goal by guaranteeing on-request deletions for unknown-lifetime data. Therefore, by employing the deletion technique, D-Tangle fulfills the first two claims on deletion performance.

5.3. Storage Cost

Additional results were recorded on storage usage to prove the effectiveness of the D-Tangle protocol. To demonstrate the storage cost and efficiency of the D-Tangle architecture, the following metric was used: database size/workload size × 100%. Ideally, the database size should always equal the workload size, in which case storage usage would always be 100%. However, owing to delays in deletions, the database size may exceed the workload size; this occurs when data are deleted from the workload but cannot yet be deleted from the database because of dependency delays. Figure 6 presents the overall storage usage results based on this criterion. Four workloads were prepared, including two different configurations of Workload C. Because Workload C is a random workload, the delete request on unknown data can be issued at any time within an interval; therefore, configurations for early and late delete requests were prepared to observe additional behavior. The results show almost constant storage usage (between 100% and 110%) when the unknown data proportion is low or high. The worst performance was observed at a 50% unknown data proportion for Workload B and for Workload C with early delete requests. However, when Workload C is configured such that delete requests are issued late relative to the finite data, the performance stabilizes even at a 50% proportion.
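The storage-usage metric defined above can be expressed directly; the function name is illustrative:

```python
def storage_usage_pct(database_size, workload_size):
    """Storage usage as defined in the text: database size / workload size x 100%.

    100% means every deleted entry was reclaimed immediately; values above
    100% measure space still held by entries whose deletion is delayed by
    dependencies (e.g. unknown-lifetime data awaiting child expiration).
    """
    return database_size / workload_size * 100.0
```

Both sizes must be in the same unit; under this definition the reported 100-110% band corresponds to at most 10% of the workload volume being held past its deletion request.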
Overall, the evaluation results show that D-Tangle achieves instant deletions on finite data while guaranteeing deletions on unknown data. Moreover, when unknown data are integrated into the workloads, D-Tangle performs uniformly across different workloads, with a parabolic shape. As the data size changes, the performance changes uniformly owing to the sampling size and network processing, maintaining the overall picture. Although the average latency differs considerably across workload configurations, the storage cost does not increase proportionally, which proves that the latency is caused by only a small proportion of the workload. These observations demonstrate the effectiveness of D-Tangle in IoT data-oriented environments with zero transaction fees.

5.4. Comparison

To prove the delete efficiency of D-Tangle over traditional blockchain-based approaches, further evaluations were performed among D-Tangle, Littichain [29], and SDM [30]. As the performance of both Littichain and SDM changes with their predefined configurations, we selected two instances of each project with different configurations. Littichain instances were selected with k = 0, which avoids maintaining arrival-time ordering and focuses only on expiration-time ordering, and with k = ∞, which keeps both arrival- and expiration-time ordering on the blocks and may create longer deletion delays. SDM, on the other hand, focuses on creating summary blocks at certain intervals; therefore, creating a summary block after each new block (t = 1) is the best case for SDM in terms of deletion, and the configuration with t = 10 is also taken for comparison as the default implementation. Regarding the workloads, Littichain and SDM handle only finite-lifetime workloads; thus, we prepared two workloads (finite and mixed) of the random category from Figure 4. The finite workload writes only finite-lifetime data, whereas the mixed workload contains finite, immutable, and unknown-lifetime data. The evaluation results on deletion latencies are shown in Figure 7.
The results on the finite workload show that D-Tangle and Littichain (k = 0) achieve instant deletions. SDM (t = 1) follows with an average delay of 10 s, and the default implementations of Littichain and SDM show the longest latencies, at approximately 40 s and 50 s, respectively. Further evaluation on the mixed workload gives a wider view of the performances. D-Tangle performs deletions on unknown data with an average latency of 30 s. Littichain and SDM, in contrast, focus only on finite workloads, treat unknown data like immutable data, and cannot delete them; because on-demand delete operations cannot be performed on unknown data, their average latency effectively reaches infinity. Although Littichain (k = 0) achieves instant deletion, this configuration gives up the physical ordering of the blocks, which introduces security vulnerabilities. Therefore, it is safe to say that D-Tangle shows the most reliable performance while maintaining all other features on all workloads.

5.5. Write Performance

To evaluate the effect of the climb-up write technique on transaction-write performance, additional experiments were performed. For these evaluations, IOTA Tangle was selected as the parent architecture of D-Tangle. For a broader view, Bitcoin was also selected as a representative blockchain instance. As the Bitcoin architecture is old and comparatively slow in data processing, Recordchain [15] was additionally selected as a recent blockchain architecture that enhances Bitcoin's performance in IoT data environments. The evaluations were performed on the simulated network, customized for each architecture's requirements. The results are shown in Figure 8.
Bitcoin relies on heavy computation in the block-mining process; thus, it shows the worst performance, and as the network size increases, its performance decreases further owing to confirmation and propagation delays. Recordchain, in contrast, employs a hash-based propagation architecture that avoids data-processing complications and achieves the same block size for all transaction types; owing to this advantage, it achieves nearly constant performance of around 700 TPS across all network configurations. By employing the DAG architecture, IOTA Tangle shows the most promising write performance, which increases linearly with network size. These results were recorded only on the specified workloads and do not bound IOTA's performance at these numbers. With the climb-up write technique, D-Tangle is somewhat slower than IOTA Tangle; nonetheless, it shows one of the fastest write throughputs among blockchain-based approaches. Considering that D-Tangle focuses on enabling deletions, the difference in write performance becomes negligible as the network grows.

6. Conclusions

This study proposes a novel solution for enabling data deletions in directed acyclic graph (DAG)-based distributed storage systems. To this end, the main architecture and source of the IOTA Tangle are inherited as the base project, and the IOTA protocol is updated to allow data deletions based on expiration time and on request. With these new features, this study proposes a new DAG architecture, referred to as D-Tangle, along with the climb-up write technique. D-Tangle addresses two main challenges related to dynamic data behavior and storage utilization. It categorizes data into three types to cover all dynamic cases of the IoT environment: finite, immutable, and unknown. The climb-up write technique guarantees instant deletions for finite-lifetime data and immutability for data with no expiration. For unknown-lifetime data, D-Tangle offers on-demand deletion, which can be invoked at any time by the data owner. To preserve reliability and transaction-verification performance, D-Tangle tolerates unpredictable delays when deleting unknown-lifetime data; avoiding these delays on mixed dynamic workloads while maintaining reliability is left for future study. Despite these limitations, D-Tangle achieves promising results in terms of storage efficiency. Extensive evaluations with different workload types proved D-Tangle to be the most reliable architecture in the IoT data environment.

Author Contributions

Conceptualization, K.T.; methodology, K.T.; software, K.T.; validation, K.T. and D.-H.K.; formal analysis, D.-H.K.; investigation, K.T.; resources, D.-H.K.; data curation, K.T.; writing—original draft preparation, K.T.; writing—review and editing, D.-H.K.; visualization, K.T. and D.-H.K.; supervision, D.-H.K.; project administration, D.-H.K.; funding acquisition, D.-H.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Research Foundation of Korea (NRF) grant, funded by the Korean Government (MSIT) under Grant NRF-2021R1F1A1050750, and in part by an Inha University research grant.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, H.; Zheng, Z.; Xie, S.; Dai, H.-N.; Chen, X. Blockchain challenges and opportunities: A survey. Int. J. Web Grid Serv. 2018, 14, 352–375. [Google Scholar] [CrossRef]
  2. Monrat, A.A.; Schelén, O.; Andersson, K. A Survey of Blockchain from the Perspectives of Applications, Challenges, and Opportunities. IEEE Access 2019, 7, 117134–117151. [Google Scholar] [CrossRef]
  3. Eyal, I. Blockchain Technology: Transforming Libertarian Cryptocurrency Dreams to Finance and Banking Realities. Computer 2017, 50, 38–49. [Google Scholar] [CrossRef]
  4. Nakamoto, S. Bitcoin: A Peer-to-Peer Electronic Cash System. 2008. Available online: https://bitcoin.org/bitcoin.pdf (accessed on 18 October 2022).
  5. Bach, L.M.; Mihaljevic, B.; Zagar, M. Comparative analysis of blockchain consensus algorithms. In Proceedings of the 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Opatija, Croatia, 21–25 May 2018; pp. 1545–1550. [Google Scholar] [CrossRef]
  6. Tulkinbekov, K.; Kim, D.H. CaseDB: Lightweight Key-Value Store for Edge Computing Environment. IEEE Access 2020, 8, 149775–149786. [Google Scholar] [CrossRef]
  7. Dutta, P.; Choi, T.-M.; Somani, S.; Butala, R. Blockchain technology in supply chain operations: Applications, challenges and research opportunities. Transp. Res. Part E Logist. Transp. Rev. 2020, 142, 102067. [Google Scholar] [CrossRef]
  8. Kim, J.-S.; Shin, N. The Impact of Blockchain Technology Application on Supply Chain Partnership and Performance. Sustainability 2019, 11, 6181. [Google Scholar] [CrossRef] [Green Version]
  9. Wang, Y.; Huirong Chen, C.; Zghari-Sales, A. Designing a blockchain enabled supply chain. Int. J. Prod. Res. 2021, 59, 1450–1475. [Google Scholar] [CrossRef]
  10. Park, J.H.; Park, J.H. Blockchain Security in Cloud Computing: Use Cases, Challenges, and Solutions. Symmetry 2017, 9, 164. [Google Scholar] [CrossRef] [Green Version]
  11. Awadallah, R.; Samsudin, A.; The, J.S.; Almazrooie, M. An Integrated Architecture for Maintaining Security in Cloud Computing Based on Blockchain. IEEE Access 2021, 9, 69513–69526. [Google Scholar] [CrossRef]
  12. Agbo, C.C.; Mahmoud, Q.H.; Eklund, J.M. Blockchain Technology in Healthcare: A Systematic Review. Healthcare 2019, 7, 56. [Google Scholar] [CrossRef] [Green Version]
  13. Hölbl, M.; Kompara, M.; Kamišalić, A.; Nemec Zlatolas, L. A Systematic Review of the Use of Blockchain in Healthcare. Symmetry 2018, 10, 470. [Google Scholar] [CrossRef] [Green Version]
  14. Mettler, M. Blockchain technology in healthcare: The revolution starts here. In Proceedings of the IEEE 18th International Conference on e-Health Networking, Applications and Services (Healthcom), Munich, Germany, 14–16 September 2016; pp. 1–3. [Google Scholar] [CrossRef]
  15. Tulkinbekov, K.; Kim, D.-H. Blockchain-enabled Approach for Big Data Processing in Edge Computing. IEEE Internet Things J. 2022, 9, 18473–18486. [Google Scholar] [CrossRef]
  16. He, Y.; Wang, Y.; Qiu, C.; Lin, Q.; Li, J.; Ming, Z. Blockchain-Based Edge Computing Resource Allocation in IoT: A Deep Reinforcement Learning Approach. IEEE Internet Things J. 2021, 8, 2226–2237. [Google Scholar] [CrossRef]
  17. Guo, S.; Dai, Y.; Guo, S.; Qiu, X.; Qi, F. Blockchain Meets Edge Computing: Stackelberg Game and Double Auction Based Task Offloading for Mobile Blockchain. IEEE Trans. Veh. Technol. 2020, 69, 5549–5561. [Google Scholar] [CrossRef]
  18. Marjani, M. Big IoT Data Analytics: Architecture, Opportunities, and Open Research Challenges. IEEE Access 2017, 5, 5247–5261. [Google Scholar] [CrossRef]
  19. Dennis, R.; Owenson, G.; Aziz, B. A Temporal Blockchain: A Formal Analysis. In Proceedings of the 2016 International Conference on Collaboration Technologies and Systems (CTS), Orlando, FL, USA, 31 October–4 November 2016; pp. 430–437. [Google Scholar] [CrossRef] [Green Version]
  20. El Khanboubi, Y.; Hanoune, M.; El Ghazouani, M. A New Data Deletion Scheme for a Blockchain-based De-duplication System in the Cloud. Int. J. Commun. Netw. Inf. Secur. IJCNIS 2021, 13, 331–339. [Google Scholar] [CrossRef]
  21. Zhu, Q.; Kouhizadeh, M. Blockchain Technology, Supply Chain Information, and Strategic Product Deletion Management. IEEE Eng. Manag. Rev. 2019, 47, 36–44. [Google Scholar] [CrossRef]
  22. Li, C.; Hu, J.; Zhou, K.; Wang, Y.; Deng, H. Using Blockchain for Data Auditing in Cloud Storage. In Proceedings of the International Conference on Cloud Computing and Security—ICCCS, Haikou, China, 8–10 June 2018; Sun, X., Pan, Z., Bertino, E., Eds.; Lecture Notes in Computer Science (LNISA). Springer: Cham, Switzerland, 2018; Volume 11065. [Google Scholar] [CrossRef]
  23. Yang, C.; Chen, X.; Xiang, Y. Blockchain-based publicly verifiable data deletion scheme for cloud storage. J. Netw. Comput. Appl. 2018, 103, 185–193. [Google Scholar] [CrossRef]
  24. Politou, E.; Casino, F.; Alepis, E.; Patsakis, C. Blockchain Mutability: Challenges and Proposed Solutions. IEEE Trans. Emerg. Top. Comput. 2021, 9, 1972–1986. [Google Scholar] [CrossRef] [Green Version]
  25. Kuperberg, M. Towards Enabling Deletion in Append-Only Blockchains to Support Data Growth Management and GDPR Compliance. In Proceedings of the IEEE International Conference on Blockchain (Blockchain), Rhodes, Greece, 2–6 November 2020; pp. 393–400. [Google Scholar] [CrossRef]
  26. Buterin, V. A Next-Generation Smart Contract and Decentralized Application Platform. Whitepaper. 2014. Available online: https://ethereum.org/en/whitepaper/ (accessed on 18 October 2022).
  27. IOTA Research Papers. Available online: https://www.iota.org/foundation/research-papers (accessed on 18 October 2022).
  28. Benčić, F.M.; Podnar Žarko, I. Distributed Ledger Technology: Blockchain Compared to Directed Acyclic Graph. In Proceedings of the 2018 IEEE 38th International Conference on Distributed Computing Systems (ICDCS), Vienna, Austria, 2–6 July 2018; pp. 1569–1570. [Google Scholar] [CrossRef] [Green Version]
  29. Pyoung, C.K.; Baek, S.J. Blockchain of Finite-Lifetime Blocks with Applications to Edge-Based IoT. IEEE Internet Things J. 2020, 7, 2102–2116. [Google Scholar] [CrossRef]
  30. Hillmann, P.; Knüpfer, M.; Heiland, E.; Karcher, A. Selective Deletion in a Blockchain. In Proceedings of the 2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS), Singapore, 29 November–1 December 2020; pp. 1249–1256. [Google Scholar] [CrossRef]
  31. Guo, F.; Xiao, X.; Hecker, A.; Dustdar, S. Characterizing IOTA Tangle with Empirical Data. In Proceedings of the GLOBECOM 2020—2020 IEEE Global Communications Conference, Taipei, Taiwan, 7–11 December 2020; pp. 1–6. [Google Scholar] [CrossRef]
  32. Bu, G.; Gürcan, Ö.; Potop-Butucaru, M. G-IOTA: Fair and confidence aware tangle. In Proceedings of the IEEE INFOCOM 2019—IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), Paris, France, 29 April–2 May 2019; pp. 644–649. [Google Scholar] [CrossRef] [Green Version]
  33. Bhandary, M.; Parmar, M.; Ambawade, D. A Blockchain Solution based on Directed Acyclic Graph for IoT Data Security using IoTA Tangle. In Proceedings of the 2020 5th International Conference on Communication and Electronics Systems (ICCES), Coimbatore, India, 10–12 June 2020; pp. 827–832. [Google Scholar] [CrossRef]
  34. Shabandri, B.; Maheshwari, P. Enhancing IoT Security and Privacy Using Distributed Ledgers with IOTA and the Tangle. In Proceedings of the 2019 6th International Conference on Signal Processing and Integrated Networks (SPIN), Noida, India, 7–8 March 2019; pp. 1069–1075. [Google Scholar] [CrossRef]
  35. Gangwani, P.; Perez-Pons, A.; Bhardwaj, T.; Upadhyay, H.; Joshi, S.; Lagos, L. Securing Environmental IoT Data Using Masked Authentication Messaging Protocol in a DAG-Based Blockchain: IOTA Tangle. Future Internet 2021, 13, 312. [Google Scholar] [CrossRef]
  36. Mukhopadhyay, U.; Skjellum, A.; Hambolu, O.; Oakley, J.; Yu, L.; Brooks, R. A brief survey of Cryptocurrency systems. In Proceedings of the 2016 14th Annual Conference on Privacy, Security and Trust (PST), Auckland, New Zealand, 12–14 December 2016; pp. 745–752. [Google Scholar] [CrossRef]
  37. Polygon Whitepaper, Ethereum’s Internet of Blockchains. Available online: https://polygon.technology/lightpaper-polygon.pdf (accessed on 18 October 2022).
  38. Chia Business Whitepaper. Available online: https://www.chia.net/whitepaper/ (accessed on 18 October 2022).
  39. Yekovenko, A. Solana: A New Architecture for a High Performance Blockchain v0.8.13. Available online: https://solana.com/solana-whitepaper.pdf (accessed on 18 October 2022).
  40. Donet Donet, J.A.; Pérez-Solà, C.; Herrera-Joancomartí, J. The Bitcoin P2P Network. In Proceedings of the International Conference on Financial Cryptography and Data Security—FC 2014, Christ Church, Barbados, 7 March 2014; Böhme, R., Brenner, M., Moore, T., Smith, M., Eds.; Lecture Notes in Computer Science (LNSC). Springer: Berlin/Heidelberg, Germany, 2014; Volume 8438. [Google Scholar] [CrossRef]
  41. Nvidia Developer Website. Available online: https://developer.nvidia.com/embedded/jetson-agx-xavier-developer-kit (accessed on 18 October 2022).
Figure 1. Traditional blockchain architecture.
Figure 2. Tangle structure and network.
Figure 3. Climb-up write cases in D-Tangle.
Figure 4. Workload setup.
Figure 5. Evaluation results.
Figure 6. Storage usage.
Figure 7. Delete latencies comparison.
Figure 8. Write performance.
Table 1. Evaluation environment setup.

Type         | Name                             | CPU                              | DRAM  | Storage
-------------|----------------------------------|----------------------------------|-------|--------
Coordinator  | Amazon EC2 (i3en.xlarge)         | 4 vCPUs, 2.5 GHz                 | 32 GB | 5 TB
Full node    | PC                               | 8 AMD Ryzen 7 1700 CPUs, 3.0 GHz | 16 GB | 3 TB
Full node    | Local server                     | 2 Intel Core i5 CPUs, 3.3 GHz    | 8 GB  | 3 TB
Partial node | Jetson AGX Xavier embedded board | 4 ARMv8 processors               | 16 GB | 1 TB
