Article

An Efficient Routing Protocol Using the History of Delivery Predictability in Opportunistic Networks

1 School of Electronic Engineering, Soongsil University, 369 Sangdo-ro, Dongjak-gu, Seoul 06978, Korea
2 Department of Information and Telecommunication Engineering, Graduate School, Soongsil University, 369 Sangdo-ro, Dongjak-gu, Seoul 06978, Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2018, 8(11), 2215; https://doi.org/10.3390/app8112215
Submission received: 10 September 2018 / Revised: 27 October 2018 / Accepted: 2 November 2018 / Published: 10 November 2018
(This article belongs to the Special Issue Applied Sciences Based on and Related to Computer and Control)

Featured Application

The proposed routing protocol can be applied to enable communication in an extreme environment, where network infrastructure is not available.

Abstract

In opportunistic networks such as delay tolerant networks, a message is delivered to its final destination node using an opportunistic routing protocol, since there is no guaranteed routing path from a sending node to a receiving node and most connections between nodes are temporary. In opportunistic routing, a message is delivered using a ‘store-carry-forward’ strategy: a message is stored in the buffer of a node, the node carries the message while moving, and the message is forwarded to another node when a contact occurs. In this paper, we propose an efficient opportunistic routing protocol using the history of delivery predictability of mobile nodes. In the proposed routing protocol, when a node receives a message from another node, the current value of the delivery predictability of the receiving node to the destination node of the message is stored, and is defined as the previous delivery predictability. Then, when two nodes contact, a message is forwarded only if the delivery predictability of the other node is higher than both the delivery predictability and the previous delivery predictability of the sending node. Performance analysis results show that the proposed protocol performs best in terms of delivery ratio, overhead ratio, and delivery latency for varying buffer sizes, message generation intervals, and numbers of nodes.

1. Introduction

In opportunistic delay tolerant networks (DTN), where an end-to-end routing path is not available, a message is delivered by forwarding between nodes when they contact each other [1,2,3,4,5]. Example application scenarios for DTN include disaster environments, where the infrastructure is destroyed and, thus, communication must be carried out using sparsely distributed peer nodes. In DTN, when a node generates a message, it stores the message in its buffer and carries it while moving. Then, if the node contacts another node opportunistically, it forwards the message to that node if the forwarding condition is satisfied. This kind of message delivery is called a ‘store-carry-forward’ strategy [1,2,3,4,5].
Figure 1 shows a DTN environment, where the network is fragmented and connectivity between a source node and a destination node is disrupted. As shown in Figure 1, only partial connections between neighboring nodes are available. Figure 2 shows the ‘store-carry-forward’ strategy that enables message delivery between any two nodes in DTN environments. When node A has a message to deliver whose destination is node C, it first stores the message in its buffer. It then carries the message while moving and forwards it to node B when node A and node B are within communication range. After that, node B stores the message in its buffer and moves while carrying it until it contacts node C. The message is finally delivered from node B to node C using this opportunistic contact.
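As an illustration, the ‘store-carry-forward’ delivery of Figure 2 can be sketched in a few lines of Python; the Node class and its methods are illustrative, not part of any DTN implementation:

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.buffer = []          # store: messages kept while the node moves

    def generate(self, msg):
        self.buffer.append(msg)

    def contact(self, other):
        # forward: on an opportunistic contact, copy every message the
        # other node does not yet hold (an unconditional, Epidemic-style
        # rule; PRoPHET replaces it with a predictability comparison)
        for msg in self.buffer:
            if msg not in other.buffer:
                other.buffer.append(msg)

a, b, c = Node("A"), Node("B"), Node("C")
a.generate("M1")   # node A stores M1 and carries it while moving
a.contact(b)       # A meets B: M1 is forwarded to B
b.contact(c)       # B meets C: M1 finally reaches destination C
print("M1" in c.buffer)  # True
```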
Many works related to DTN protocols have been carried out [1,2,3,4,5,6,7,8]. In epidemic protocols [6], a node forwards a message to other nodes whenever they are within communication range. This is very simple, but its high message overhead is a major drawback, given the limited buffer space of mobile nodes. In the spray and wait protocol [7], the total number of copies of each newly generated message is limited to L to reduce the message overhead. Therefore, only L−1 copies of the message can be disseminated to other nodes from the message generation node in the spray phase. In the wait phase, each of the L nodes holding a single message copy delivers the message only to its destination node. However, in the spray phase, message copies are disseminated blindly, without a predefined criterion. In the probabilistic routing protocol for intermittently connected networks (PRoPHET) [8], the delivery predictability between two nodes, which has a value between 0 and 1, is calculated based on the contact history between them, and a message is forwarded based on the delivery predictability. The delivery predictability of node A to node B, P(A,B), estimates the likelihood that node A delivers a message to node B. Therefore, a node with a higher delivery predictability is likely to be a better node to deliver the message. If two nodes contact, the delivery predictability between them increases, as defined in Equation (1) [8]:
P(A,B) = P(A,B)_old + (1 − Δ − P(A,B)_old) × P_encounter, (1)
where P_encounter and Δ are a scaling factor and a parameter defining an upper bound of P(A,B), respectively. As time passes after the last contact, the delivery predictability between node A and node B decreases, as defined in Equation (2) [8]:
P(A,B) = P(A,B)_old × γ^K, (2)
where γ and K are the aging constant and the number of elapsed time units since the last contact, respectively. Additionally, there is a transitive property, i.e., if node A and node B contact frequently, and node B and node C contact frequently, it is highly probable that node A can deliver a message to node C through node B, as defined in Equation (3) [8]:
P(A,C) = MAX(P(A,C)_old, P(A,B) × P(B,C)_recv × β), (3)
where β is a scaling constant. In the PRoPHET protocol, when two nodes contact, i.e., when they are within the transmission range, they exchange summary vectors which include all the delivery predictability values of all previous contact nodes [8].
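The three update rules, Equations (1)–(3), can be sketched as follows. The values of P_encounter, β, and γ shown are defaults commonly used with PRoPHET; the value of Δ is an illustrative assumption:

```python
P_ENCOUNTER = 0.75   # scaling factor (common PRoPHET default)
DELTA = 0.01         # parameter bounding P(A,B) from above (assumed value)
GAMMA = 0.98         # aging constant
BETA = 0.25          # transitivity scaling constant

def on_contact(p_ab):
    """Equation (1): increase P(A,B) when nodes A and B meet."""
    return p_ab + (1 - DELTA - p_ab) * P_ENCOUNTER

def age(p_ab, k):
    """Equation (2): decay P(A,B) after k elapsed time units."""
    return p_ab * GAMMA ** k

def transitive(p_ac, p_ab, p_bc_recv):
    """Equation (3): transitive update of P(A,C) through node B."""
    return max(p_ac, p_ab * p_bc_recv * BETA)

p = on_contact(0.0)   # first contact: 0 + (1 - 0.01 - 0) * 0.75 = 0.7425
p = age(p, 10)        # ten idle time units later, the value has decayed
```

Note that the aging in Equation (2) is what later keeps the preP threshold of the proposed protocol from growing monotonically over time.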
In this paper, the PRoPHET protocol is considered as the base protocol. In PRoPHET, several forwarding strategies are defined. In the GRTR strategy, for example, a message destined to node D is forwarded from node A to node B if P(B,D) is larger than P(A,D). In the GTMX strategy, a message is forwarded if P(B,D) is larger than P(A,D) and the number of times the message has been forwarded is smaller than a threshold value NFmax. In GRTR+, a message is forwarded if P(B,D) is larger than P(A,D) and P(B,D) is larger than Pmax, which is the largest delivery predictability of any node to which the message has ever been forwarded. We note that GRTR, GTMX, and GRTR+ are just mnemonics [8].
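The three forwarding conditions can be written as simple predicates. A minimal sketch, where p_ad and p_bd denote P(A,D) and P(B,D) and the argument names are ours:

```python
def grtr(p_ad, p_bd):
    # GRTR: forward when the contact node predicts better delivery
    return p_bd > p_ad

def gtmx(p_ad, p_bd, nf, nf_max):
    # GTMX: GRTR plus a cap NFmax on how often the message
    # may be forwarded (nf = forwardings so far)
    return p_bd > p_ad and nf < nf_max

def grtr_plus(p_ad, p_bd, p_max):
    # GRTR+: GRTR plus beating Pmax, the largest predictability of
    # any node the message has ever been forwarded to
    return p_bd > p_ad and p_bd > p_max
```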
In this paper, we propose an efficient opportunistic routing protocol using the delivery predictability of previously contacted nodes, i.e., the history of delivery predictability. In the proposed protocol, when a node receives a message from another node, the current value of the delivery predictability of the receiving node to the destination node of the received message is stored in the receiving node as the previous delivery predictability (preP). We note that the preP of a node is defined for each destination node individually, and the preP for a destination node is updated whenever the node receives a message destined to that node. When two nodes contact, a message is forwarded to the other node if the delivery predictability of the other node is higher than both the delivery predictability and the preP of the sending node. If there is no preP value for the considered message, only the current delivery predictability values are compared, and the message is forwarded to the other node if its delivery predictability is higher. The preP can be considered an indicator of a good message forwarder based on the history of delivery predictability. That is, if node A has a higher preP than the current delivery predictability of the contact node B, it is not appropriate to disseminate the message to node B, even though node A currently has a smaller delivery predictability than node B. This is because too much forwarding based only on current delivery predictabilities results in messages being dropped due to buffer overflow and, thus, the overall performance degrades.
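The forwarding decision described above reduces to a single predicate. This is a minimal sketch under our reading of the rule, with illustrative names:

```python
def should_forward(p_ad, p_bd, prep_ad=None):
    """Should node A forward a message destined to D to contact node B?"""
    if prep_ad is None:
        # no history yet (e.g., a freshly generated message):
        # fall back to the plain GRTR comparison
        return p_bd > p_ad
    # forward only if B beats both the current delivery predictability
    # and the stored previous delivery predictability (preP) of A
    return p_bd > p_ad and p_bd > prep_ad

print(should_forward(0.3, 0.5))        # True: no preP, GRTR applies
print(should_forward(0.3, 0.5, 0.6))   # False: preP blocks the forwarding
```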
In the PRoPHET protocol with the GRTR strategy, if a node holding a message has a small delivery predictability, the message is forwarded excessively, which results in messages being dropped from buffers. In order to restrict message dissemination efficiently in the PRoPHET protocol, enhanced message forwarding strategies such as GTMX and GRTR+ were proposed. In the PRoPHET protocol with the GTMX strategy, the total number of times a message may be forwarded is limited by a threshold value, in addition to the basic GRTR comparison of current delivery predictabilities. In the PRoPHET protocol with the GRTR+ strategy, the additional condition is that the delivery predictability of the receiving node must be larger than a threshold probability, which is the largest delivery predictability reported by any node to which the message has been sent so far. In GRTR+, this threshold probability is a non-decreasing function of time, so message forwarding becomes too restricted as time goes on, which may result in overall performance degradation. In the proposed protocol, we also use a threshold probability, preP, but it is not a non-decreasing function of time, owing to the aging property of the delivery predictability in the PRoPHET protocol; thus, it controls message dissemination efficiently.
The basic idea of using both the delivery predictability and the previous delivery predictability of the sending node is similar to our preliminary work presented as an extended abstract in [9]. However, this work extends the previous work significantly in the following aspects:
- The proposed protocol is described in more detail by completing the flowchart of the proposed protocol and by using an example of message delivery to illustrate it.
- The performance of the proposed protocol is newly and thoroughly analyzed using simulation and compared with four other protocols in terms of delivery ratio, overhead ratio, and delivery latency.
- Related works are significantly extended in this paper.
The remaining part of this paper is organized as follows: In Section 2, we survey related works. In Section 3, we describe the algorithm of the proposed protocol in detail using examples. In Section 4, we analyze the performance of the proposed protocol extensively using simulations. Finally, we conclude this work in Section 5.

2. Related Works

Related works on general delay tolerant network protocols have already been covered by survey papers [10,11,12,13,14], where the authors classified and compared related protocols in detail. Since the main focus of our paper is related to the PRoPHET protocol, we review related works on extensions of the PRoPHET protocol for performance enhancement. Many works related to the PRoPHET protocol have been carried out [15,16,17,18,19,20,21,22,23,24,25,26]. In Reference [15], the number of message copies is limited by using the sociality of nodes in order to improve performance. In Reference [16], the buffer is managed by controlling the delivery predictability based on the number of message copies, in order to solve the network performance degradation caused by excessive copies of a message. In Reference [17], either Epidemic or PRoPHET is selected based on the density of nodes. That is, if the number of simultaneous contacts is smaller than a threshold, the Epidemic protocol is applied; otherwise, the PRoPHET protocol is applied. By doing this, the appropriate protocol is selected dynamically based on the density of the neighboring nodes. In Reference [18], the contact duration, which is the time duration for which a contact is maintained, and the contact frequency are also considered in the calculation of the delivery predictability for efficient message delivery. In Reference [19], the AntPRoPHET scheme was proposed, where ant colony optimization, a nature-inspired optimization technique based on the behavior of ants, was applied to optimize the performance of the PRoPHET protocol.
In Reference [20], in order to reduce unnecessary message forwarding, a message is forwarded only when the delivery predictability of the receiving node is larger than the delivery predictability of the sending node plus a minimum threshold value, and the sum of the delivery predictabilities of the receiving nodes so far is less than a predefined threshold value. In PRoPHET+ [21], a new metric, deliverability, defined as a weighted sum of battery power, location, popularity, buffer size, and delivery predictability, was proposed. In Reference [22], the hop count traversed by a message, as well as the delivery predictability, is considered to determine the next hop node. In Reference [23], the distance-based PRoPHET protocol was proposed, where the distance between any two nodes is additionally considered in calculating the delivery predictability. In Reference [24], a node selects the node at the shortest distance as a forwarder in order to transmit at the highest rate; the authors extended the algorithm in Reference [23], which considered the random waypoint mobility model, by additionally considering the community mobility model. In the history encounter-based spray and wait (HESnW) protocol [25], the multiple probability from node S to node D is defined as the product of the delivery predictability from node S to node H, a history node that node S has contacted before, and the delivery predictability from node H to node D. Then, in the spray phase, a message is forwarded to a node either if the node has a higher delivery predictability, or if the node’s multiple probability is higher than the delivery predictability of the sending node while the node’s delivery predictability is higher than the transfer threshold.
In Reference [26], the authors proposed an improved PRoPHET protocol using the context information of the node such as the average distance and time from the reception to the delivery of a message.
In the related works on the PRoPHET protocol in References [15,16,17,18,19,20,21,22,23,24,25,26], various kinds of context information, such as sociality [15], the number of message copies [16], the density of nodes [17], the contact duration [18], ant colony optimization [19], the sum of delivery predictabilities [20], a weighted sum of battery power, location, popularity, buffer size, and delivery predictability [21], hop count [22], distance [23], the community mobility model [24], multiple probability [25], and the average distance and time [26], are used to decide on message forwarding. We classify the related works on extended PRoPHET protocols depending on whether the delivery predictability is modified using additional context information, or whether additional metrics are used for the forwarding decision without modifying the delivery predictability, as shown in Figure 3.

3. Proposed Protocol

In the proposed protocol, if any node receives a message from another node, the value of the delivery predictability of the receiving node to the destination node of the message is managed as preP in this paper. Then, when two nodes contact, a node with a message to send checks whether the delivery predictability of the other node is higher than both the delivery predictability and the previous delivery predictability of the sending node; if so, the message is forwarded. Otherwise, the message is not forwarded. By doing this, we introduce more restrictions on forwarding than the basic PRoPHET protocol and, thus, avoid the excessive forwarding of messages efficiently. In the proposed protocol, if a message has not yet been forwarded from the node which originally generated it, message forwarding is carried out if the delivery predictability of the receiving node is higher than that of the sending node. This is because we need to promote message dissemination when a message has just been generated; since the preP of the sending node is not defined in this case, we forward the message based on the basic GRTR strategy of the PRoPHET protocol.
Figure 4 shows examples of forwarding in the proposed protocol compared with that of the PRoPHET protocol with the GRTR strategy. In Figure 4a, which shows the operation of the PRoPHET protocol with GRTR strategy, node S forwards messages M1, M2, and M3 to contact node B, and node B forwards message M5 to node S at T = t0 based on the comparison of the delivery predictabilities of node S and node B. We assume a situation that the buffer of node B becomes full when it receives messages M1, M2, and M3. At T = t1, when node B contacts node C, node C forwards message M6 and one message is dropped due to a buffer overflow. Node B forwards messages M1 and M3 to node C. At T = t2, node C contacts node E and forwards messages M1, M6, and M7 to node E and receives messages M8 and M9. At this time, the buffer of node C overflows and one message is dropped. Additionally, the buffer of node B becomes full.
In Figure 4b, node S forwards messages M2 and M3 and does not forward message M1, contrary to the case in Figure 4a since the preP of message M1 of node S is higher than the delivery predictability of node B at T = t0. Node B does not forward message M5 to node S. At T = t1, node B forwards message M3 to node C and receives message M7 from node C by comparing the delivery predictability and the preP values. Node C forwards messages M6 and M7 to node E and receives message M9 from node E at T = t2. We note that message dissemination is efficiently restricted in the proposed protocol and no message drop occurs in the considered forwarding scenario, contrary to the PRoPHET protocol with the GRTR strategy, and this results in an overall performance enhancement.
Figure 5 shows the flowchart of the proposed protocol. When two nodes A and B contact, they first exchange summary vectors, which include information on the messages they have in their buffers and the delivery predictability values of each node to all other previously contacted nodes. We note that the values of preP are not exchanged during contact. Then node A chooses a message which does not exist in node B. If the destination of the message is node B, it is delivered to node B. Otherwise, node A compares its own delivery predictability with that of node B. If P(A,D) is larger than P(B,D), the message is not forwarded to node B. If P(A,D) is not larger than P(B,D), node A checks whether preP(A,D) exists. We note that if the message was originally generated in node A and has not been forwarded to another node before, preP(A,D) does not exist. If preP(A,D) does not exist, the message is forwarded to node B, since P(A,D) is not larger than P(B,D). If preP(A,D) exists, the value of P(B,D) is compared with that of preP(A,D). If P(B,D) is larger than preP(A,D), the message is forwarded to node B, and preP(B,D) is set to P(B,D). Otherwise, the message is not forwarded. If the considered message is not the last message, the same procedure is applied to the remaining messages.
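The per-contact procedure of the flowchart can be sketched as follows; the dictionary-based data structures are illustrative and not taken from the ONE simulator:

```python
def handle_contact(a, b):
    """One direction of a contact: node a offers its messages to node b.

    a and b are dicts with 'id', 'buffer' (message -> destination),
    and per-destination 'P' and 'preP' maps.
    """
    for msg, dest in list(a["buffer"].items()):
        if msg in b["buffer"]:
            continue                      # b already holds this message
        if dest == b["id"]:
            b["buffer"][msg] = dest       # direct delivery to the destination
            continue
        p_ad = a["P"].get(dest, 0.0)
        p_bd = b["P"].get(dest, 0.0)
        if p_bd <= p_ad:
            continue                      # basic GRTR condition fails
        prep_ad = a["preP"].get(dest)
        if prep_ad is None or p_bd > prep_ad:
            b["buffer"][msg] = dest       # forward a copy of the message
            b["preP"][dest] = p_bd        # the receiver records its preP

a = {"id": "A", "buffer": {"M1": "D"}, "P": {"D": 0.3}, "preP": {}}
b = {"id": "B", "buffer": {}, "P": {"D": 0.6}, "preP": {}}
handle_contact(a, b)
print("M1" in b["buffer"], b["preP"]["D"])  # True 0.6
```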
Figure 6 shows an example of message delivery in the proposed protocol. Node A and node B each have a table for the message list, as well as the delivery predictability and preP of the other nodes. For example, in Figure 6, node A has messages M1, M2, M3, and M4. Delivery predictabilities for nodes B, C, D, E, and F are stored, but preP is stored only for nodes C, D, and E, since we assume that node A has not exchanged messages with nodes B and F yet. Additionally, node B has messages M5, M6, and M7. Delivery predictabilities for nodes A, C, D, E, and F are stored, but preP information is stored only for C, D, E, and F. As shown in Figure 6b, after contact, message M1 is not forwarded to node B since the delivery predictability of node B is not higher than that of node A. Message M2 is forwarded to node B since the delivery predictability of node B for node D, which is the destination node of message M2, is higher than both the current delivery predictability and the preP of node A. M3 is delivered to node B since the destination node of M3 is node B, and the preP and delivery predictability information for node B in node A is updated. Message M4 is forwarded to node B with a rationale similar to that of message M2. Only message M7 of node B is forwarded to node A, since node A has a higher delivery predictability and preP.

4. Performance Analysis

In this section, we analyze the performance of the proposed protocol using the opportunistic network environment (ONE) simulator developed by Helsinki University [27,28]. Then, we compare the performance of the proposed protocol with the PRoPHET protocol with the GRTR strategy, the PRoPHET protocol with the GRTR+ strategy, the PRoPHET protocol with the GTMX strategy, and the modified HESnW [25] protocol, with a confidence interval of 95%. The modified HESnW is selected as a comparison protocol since it is one of the latest protocols using the contact history information of mobile nodes. In the modified HESnW protocol, which is called HEPRoPHET hereafter, we applied the concept of multiple probability of HESnW to the forwarding decision of the PRoPHET protocol. In HEPRoPHET, a message is forwarded to another node either if the node has a higher delivery predictability, or if the node’s multiple probability is higher than the delivery predictability of the sending node while the node’s delivery predictability is higher than the transfer threshold. The performance of the proposed protocol is compared with PRoPHET with GRTR, PRoPHET with GRTR+, PRoPHET with GTMX, and HEPRoPHET in terms of delivery ratio, overhead ratio, and delivery latency, defined as follows:
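The HEPRoPHET forwarding condition described above can be sketched as a predicate; the transfer threshold value here is illustrative, since the text does not fix it:

```python
def hep_forward(p_ad, p_bd, multi_bd, transfer_threshold=0.1):
    """Forward from node A to contact node B under HEPRoPHET (sketch).

    p_ad, p_bd: delivery predictabilities of A and B to destination D;
    multi_bd: B's multiple probability to D via a history node.
    """
    if p_bd > p_ad:
        return True                       # plain GRTR-style condition
    # otherwise rely on the multiple probability, but only when B's own
    # delivery predictability exceeds the transfer threshold
    return p_bd > transfer_threshold and multi_bd > p_ad

print(hep_forward(0.3, 0.5, 0.0))   # True
```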
delivery ratio = (number of delivered messages) / (number of created messages), (4)
overhead ratio = (number of relayed messages − number of delivered messages) / (number of delivered messages), (5)
delivery latency = (sum of delivered message delays) / (number of delivered messages). (6)
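Given the counters produced by a simulation run, the three metrics defined above can be computed directly; the variable names are illustrative:

```python
def metrics(created, delivered, relayed, delays):
    """Compute delivery ratio, overhead ratio, and delivery latency."""
    delivery_ratio = delivered / created                # delivered / created
    overhead_ratio = (relayed - delivered) / delivered  # extra relays per delivery
    delivery_latency = sum(delays) / delivered          # mean delivery delay
    return delivery_ratio, overhead_ratio, delivery_latency

# 100 messages created, 40 delivered after 200 relay events in total
dr, oh, lat = metrics(100, 40, 200, delays=[120.0] * 40)
print(dr, oh, lat)  # 0.4 4.0 120.0
```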
We assume the parameter values for simulation as listed in Table 1, where U[x, y] denotes a uniform distribution between x and y. We also note that we randomly generate a source node and a destination node for a message out of the total 126 nodes.
Figure 7, Figure 8 and Figure 9 show the delivery ratio, overhead ratio, and delivery latency for buffer sizes of the pedestrians and cars varying from 10 Mbytes to 90 Mbytes, where the buffer size of the trams is 10 times the buffer size of the pedestrians and cars. In Figure 7, the delivery ratios of all the compared protocols increase as the buffer size increases, because more messages can be accommodated in the buffer and, thus, fewer messages are dropped. Therefore, more messages can be delivered to the destination node successfully without being dropped during delivery due to buffer overflow. The proposed protocol has the highest delivery ratio for all the considered buffer sizes since it efficiently restricts message forwarding by considering preP.
Figure 8 shows the overhead ratio for different buffer sizes. Overhead ratios of all the protocols decrease as the value of the buffer size increases since the delivery ratio increases as the buffer size increases for all the protocols and, thus, the overhead ratio decreases as can be seen in Equation (5). We note that the effect of the increased relayed messages is minor since the number of generated messages is fixed. The overhead ratio of the proposed protocol is the smallest out of all the considered protocols since the proposed protocol restricts the forwarded messages efficiently and has a smaller number of forwarded messages. It also has a higher delivery ratio.
Figure 9 shows the delivery latency for different buffer sizes. The delivery latencies of all the considered protocols increase as the value of the buffer size increases since more messages are accommodated in the buffer of nodes and, thus, longer latencies are required to deliver messages to the final destination nodes, given a limited contact duration between nodes. The proposed protocol has the smallest delivery latency since the proposed protocol restricts the forwarded messages efficiently and, thus, has a smaller number of messages in the buffer, which results in a faster message delivery to the final destination nodes.
Figure 10, Figure 11 and Figure 12 show the delivery ratio, overhead ratio, and delivery latency for message generation intervals varying from U[5 s, 15 s] to U[45 s, 55 s], when the buffer sizes of the pedestrians and cars are 50 Mbytes and the buffer size of the trams is 500 Mbytes. In Figure 10, the delivery ratios of all the compared protocols increase as the message generation interval increases, because fewer messages are generated when the message generation interval increases, and more messages can be delivered to the destination node successfully without being dropped during delivery due to buffer overflow. The proposed protocol has the highest delivery ratio for all the considered message generation intervals since it efficiently restricts message forwarding by considering preP.
Figure 11 shows the overhead ratio. Overhead ratios of all the protocols increase as the message generation interval increases. This is because the effect of the decreased delivered messages when the message generation interval is high in Equation (5) is more dominant than that of the decreased relayed messages. The overhead ratio of the proposed protocol is the smallest out of all the considered protocols since the proposed protocol restricts the forwarded messages efficiently, has a smaller number of forwarded messages, and also has a higher delivery ratio.
Figure 12 shows the delivery latency. The delivery latencies of all the protocols decrease as message generation interval increases since higher message generation interval results in fewer messages in the buffer and thus the chance of forwarding of a message during a short contact duration is larger for a fixed buffer size. The proposed protocol has the smallest delivery latency since the proposed protocol restricts the forwarded messages efficiently and, thus, has a smaller number of messages in the buffer, which results in a faster message delivery to the final destination nodes.
Figure 13, Figure 14 and Figure 15 show the delivery ratio, overhead ratio, and delivery latency for the number of trams varying from 3 to 27, where the number of pedestrians is 80 and the number of cars is 40, which are the default values in the ONE simulator. In Figure 13, the delivery ratios of all the protocols increase as the number of trams increases, since trams carry more messages with a large enough buffer size and, thus, more messages can be delivered successfully for a larger number of trams. We note that the proposed protocol has the highest delivery ratio for all the considered numbers of trams since it efficiently restricts message forwarding by considering preP.
Figure 14 shows the overhead ratio. The overhead ratios of all the protocols increase as the number of trams increases, since more messages are relayed due to more message forwarders. The overhead ratio of the proposed protocol is the smallest of all the considered protocols, since the proposed protocol restricts forwarded messages efficiently and, thus, has a smaller number of forwarded messages; it also has a higher delivery ratio.
Figure 15 shows the delivery latency. The delivery latencies of all the protocols decrease as the number of trams increases, since messages can be forwarded quickly with the help of more message forwarders and delivered to destination nodes more quickly. The proposed protocol has the smallest delivery latency since the proposed protocol restricts forwarded messages efficiently and, thus, has a smaller number of messages in the buffer, which results in faster message delivery to the final destination nodes.
Figure 16, Figure 17 and Figure 18 show the delivery ratio, overhead ratio, and delivery latency for the number of pedestrians varying from 40 to 360, where the number of trams is 6 and the number of cars is 40, which are the default values in the ONE simulator. In Figure 16, the delivery ratio of the proposed protocol increases as the number of pedestrians increases from 40 to 80, since more pedestrians can act as message forwarders and more messages can be delivered successfully. However, the delivery ratio of the proposed protocol slightly decreases when the number of pedestrians is higher than 80, since increased message forwarding and the resulting message drops due to buffer overflow restrict successful message delivery, as the buffer size of the pedestrians is not sufficient. The delivery ratios of the other protocols decrease significantly due to more messages being dropped from the buffer. We note that the proposed protocol has the highest delivery ratio for all the considered numbers of pedestrians since it efficiently restricts message forwarding by considering preP.
Figure 17 shows the overhead ratio. The overhead ratios of all the protocols increase as the number of pedestrians increases, since more messages are relayed due to more message forwarders, and the rate of increase is more significant for higher numbers of pedestrians because the number of delivered messages decreases. The overhead ratio of the proposed protocol is the smallest of all the considered protocols since the proposed protocol restricts forwarded messages efficiently and, thus, has a smaller number of forwarded messages; it also has a higher delivery ratio.
Figure 18 shows the delivery latency. Delivery latencies of all the protocols decrease as the number of pedestrians increases since messages can be forwarded quickly with the help of more message forwarders and delivered to destination nodes more quickly. The proposed protocol has the smallest delivery latency since the proposed protocol restricts the forwarded messages efficiently and, thus, has a smaller number of messages in the buffer, which results in the faster message delivery to the final destination nodes.
Figure 19, Figure 20 and Figure 21 show the delivery ratio, overhead ratio, and delivery latency for the number of cars varying from 40 to 360, where the number of trams is 6 and the number of pedestrians is 80, which is the default setting in the ONE simulator. In Figure 19, the delivery ratio of the proposed protocol increases as the number of cars increases from 40 to 120, since more cars can act as message forwarders and more messages can be delivered successfully. However, the delivery ratio of the proposed protocol slightly decreases when the number of cars is higher than 120, since increased message forwarding and the resulting message drops due to buffer overflow restrict successful message delivery, as the buffer size of the cars is not sufficient. The delivery ratios of the other protocols decrease significantly due to more messages being dropped from the buffer. We note that the proposed protocol has the highest delivery ratio for all the considered numbers of cars since it efficiently restricts message forwarding by considering preP. The shape of the delivery ratio curves of all the protocols is similar to that of Figure 16, rather than that of Figure 13, since the buffer size of the cars is the same as that of the pedestrians, which is significantly smaller than that of the trams.
Figure 20 shows the overhead ratio. The overhead ratios of all the protocols increase as the number of cars increases, since more message forwarders cause more messages to be relayed; the rate of increase is steeper for a higher number of cars because the number of delivered messages decreases. The proposed protocol has the smallest overhead ratio of all the considered protocols, since it restricts message forwarding efficiently, which yields fewer forwarded messages, and it also achieves a higher delivery ratio.
Figure 21 shows the delivery latency. The delivery latencies of all the protocols decrease as the number of cars increases, since more message forwarders allow messages to be forwarded and delivered to destination nodes more quickly. The proposed protocol has the smallest delivery latency, since it restricts message forwarding efficiently and thus keeps fewer messages in the buffer, which results in faster message delivery to the final destination nodes.
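The forwarding restriction that drives all of these results is the rule stated in the abstract: a message is forwarded to an encountered node only if that node's delivery predictability exceeds both the sender's current delivery predictability and its recorded previous delivery predictability (preP). A minimal sketch of this decision (function and variable names are illustrative, not from the paper's implementation):

```python
def should_forward(P_self, preP_self, P_other):
    """Decide whether node A forwards a message to encountered node B.

    P_self:    A's current delivery predictability to the destination
    preP_self: A's previous delivery predictability (preP) for this message
    P_other:   B's delivery predictability to the destination
    """
    # Forward only if B beats both A's current predictability and A's preP.
    return P_other > P_self and P_other > preP_self

# B's predictability (0.6) beats A's current value (0.4) but not A's
# preP (0.7), so the message is withheld; plain GRTR would forward it.
print(should_forward(0.4, 0.7, 0.6))  # False
print(should_forward(0.4, 0.3, 0.6))  # True
```

The first example illustrates why the proposed protocol relays fewer messages than PRoPHET with GRTR: the extra preP condition suppresses forwards to nodes that are no better than a relay the message has already visited.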

5. Conclusions

In this paper, we proposed an efficient opportunistic routing protocol using delivery predictability and preP. We analyzed the performance of the proposed protocol using the ONE simulator and compared it with that of other protocols in varying environments. The performance analyses showed that, by efficiently restricting message forwarding, the proposed protocol outperforms the PRoPHET protocol with the GRTR, GRTR+, and GTMX strategies, as well as HEPRoPHET, in terms of delivery ratio, overhead ratio, and delivery latency for varying buffer sizes, message generation intervals, and numbers of nodes.

Author Contributions

E.H.L., D.Y.S. and Y.W.C. conceived and designed the proposed scheme; E.H.L. and D.Y.S. conceived, designed, and performed the simulations; D.Y.S. and Y.W.C. analyzed the data; Y.W.C. wrote the paper. All authors have read and approved the final manuscript.

Funding

This research of Dong Yeong Seo and Yun Won Chung was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2016R1D1A1B03930299).

Conflicts of Interest

The authors declare no conflict of interest with respect to the research, authorship, and/or publication of this article.

References

  1. Fall, K. A delay-tolerant network architecture for challenged internets. In Proceedings of the 2003 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications (SIGCOMM ’03), Karlsruhe, Germany, 25–29 August 2003. [Google Scholar]
  2. Burleigh, S.; Hooke, A.; Torgerson, L.; Fall, K.; Cerf, V.; Durst, B.; Scott, K. Delay-tolerant networking: An approach to interplanetary internet. IEEE Commun. Mag. 2003, 41, 128–136. [Google Scholar] [CrossRef]
  3. Delay Tolerant Networking Research Group. Available online: https://irtf.org/concluded/dtnrg (accessed on 15 February 2018).
  4. Zhang, Z. Routing in intermittently connected mobile ad hoc networks and delay tolerant networks: Overview and challenges. IEEE Commun. Surv. Tutor. 2006, 8, 24–37. [Google Scholar] [CrossRef]
  5. Martin-Campillo, A.; Crowcroft, J.; Yoneki, E.; Marti, R. Evaluating opportunistic networks in disaster scenarios. J. Netw. Comput. Appl. 2013, 36, 870–880. [Google Scholar] [CrossRef]
  6. Zhang, X.; Neglia, G.; Kurose, J.; Towsley, D. Performance modeling of epidemic routing. Comput. Netw. 2007, 51, 2867–2891. [Google Scholar] [CrossRef] [Green Version]
  7. Spyropoulos, T.; Psounis, K.; Raghavendra, C.S. Spray and wait: An efficient routing scheme for intermittently connected mobile networks. In Proceedings of the ACM SIGCOMM Workshop on Delay-tolerant Networking, Philadelphia, PA, USA, 26 August 2005. [Google Scholar]
  8. Lindgren, A.; Doria, A.; Davies, E.; Grasic, S. Probabilistic routing protocol for intermittently connected networks. ACM SIGMOBILE Mob. Comput. Commun. Rev. 2003, 7, 19–20. [Google Scholar] [CrossRef]
  9. Lee, E.H.; Seo, D.Y.; Chung, Y.W. An improved forwarding strategy in opportunistic network. In Proceedings of the Future Generation Communication and Networking, Jeju, Korea, 23–25 November 2016. [Google Scholar]
  10. Khabbaz, M.; Assi, C.M.; Fawaz, W.F. Disruption-Tolerant Networking: A Comprehensive Survey on Recent Developments and Persisting Challenges. IEEE Commun. Surv. Tutor. 2012, 14, 607–640. [Google Scholar] [CrossRef]
  11. Cao, Y.; Sun, Z. Routing in Delay/Disruption Tolerant Networks: A Taxonomy, Survey and Challenges. IEEE Commun. Surv. Tutor. 2013, 15, 654–677. [Google Scholar] [CrossRef]
  12. Zhu, Y.; Xu, B.; Shi, X.; Wang, Y. A Survey of Social-Based Routing in Delay Tolerant Networks: Positive and Negative Social Effects. IEEE Commun. Surv. Tutor. 2013, 15, 387–401. [Google Scholar] [CrossRef]
  13. Wei, K.; Liang, X.; Xu, K. A Survey of Social-Aware Routing Protocols in Delay Tolerant Networks: Applications, Taxonomy and Design-Related Issues. IEEE Commun. Surv. Tutor. 2014, 16, 556–578. [Google Scholar]
  14. Jain, S.; Chawla, M. Survey of Buffer Management Policies for Delay Tolerant Networks. J. Eng. 2014, 2014, 117–123. [Google Scholar] [CrossRef]
  15. Lee, K.H.; Park, T.M.; Kim, C.K. Social-community based DTN Routing. J. Korean Inst. Inf. Sci. Eng. 2011, 38, 366–373. [Google Scholar]
  16. Liu, Y.; Wang, J.; Zhang, S.; Zhou, H. A Buffer Management Scheme Based on Message Transmission Status in Delay Tolerant Network. In Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM), Kathmandu, Nepal, 5–9 December 2011. [Google Scholar]
  17. Kim, M.J.; Chung, Y.W. An improved message delivery scheme based on node density in delay tolerant network. J. Korean Inst. Inf. Technol. 2014, 12, 69–74. [Google Scholar] [CrossRef]
  18. Lee, H.J.; Nam, J.C.; Seo, W.K.; Choi, J.I.; Cho, Y.Z. An Efficient DTN Routing Protocol with Considering Contact Duration. In Proceedings of the KICS Annual Winter Conference, Jeongseon, Korea, 21–23 January 2015. [Google Scholar]
  19. Ababou, M.; Elkouch, R.; Mellafkih, M.; Ababou, N. AntProPHET: A new routing protocol for delay tolerant networks. In Proceedings of the 2014 14th Mediterranean Microwave Symposium, Marrakech, Morocco, 12–14 December 2014. [Google Scholar]
  20. Ouadrhiri, A.E.; Kamili, M.E.; Fenni, M.R.E.; Omari, L. Learning controlled forwarding strategy improving probabilistic routing in DTNs. In Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC), Istanbul, Turkey, 6–9 April 2014. [Google Scholar]
  21. Huang, T.K.; Lee, C.K.; Chen, L.J. PRoPHET+: An adaptive PRoPHET-based routing protocol for opportunistic network. In Proceedings of the 24th IEEE International Conference on Advanced Information Networking and Applications, Perth, WA, Australia, 20–23 April 2010. [Google Scholar]
  22. Lee, F.C.; Yeo, C.K. Probabilistic routing based on history of messages in delay tolerant networks. In Proceedings of the IEEE Vehicular Technology Conference, San Francisco, CA, USA, 5–8 September 2011. [Google Scholar]
  23. Sok, P.; Kim, K. Distance-based PRoPHET routing protocol in disruption tolerant network. In Proceedings of the International Conference on ICT Convergence (ICTC), Jeju, Korea, 14–16 October 2013. [Google Scholar]
  24. Sok, P.; Tan, S.; Kim, K. PRoPHET routing protocol based on neighbor node distance using a community mobility model in delay tolerant networks. In Proceedings of the IEEE International Conference on High Performance Computing and Communications & IEEE International Conference on Embedded and Ubiquitous Computing, Zhangjiajie, China, 13–15 November 2013. [Google Scholar]
  25. Gan, S.; Zhou, J.; Wei, K. HESnW: History Encounters-Based Spray-and-Wait Routing Protocol for Delay Tolerant Networks. J. Inf. Process. Syst. 2017, 13, 618–629. [Google Scholar]
  26. Baek, K.M.; Seo, D.Y.; Chung, Y.W. An Improved Opportunistic Routing Protocol Based on Context Information of Mobile Nodes. Appl. Sci. 2018, 8, 1344. [Google Scholar] [CrossRef]
  27. The Opportunistic Network Environment Simulator. Available online: http://www.netlab.tkk.fi/tutkimus/dtn/theone/ (accessed on 3 May 2018).
  28. Keranen, A.; Ott, J.; Karkkainen, T. The ONE simulator for DTN protocol evaluation. In Proceedings of the second International Conference on Simulation Tools and Techniques, Rome, Italy, 2–6 March 2009. [Google Scholar]
  29. Huang, H.; Mao, Y.; Zhang, X. Probability routing algorithm based on historical throughput in DTN network. In Proceedings of the International Conference on Consumer Electronics, Communications and Networks, Xianning, China, 20–22 November 2013. [Google Scholar]
  30. Lindgren, A.; Phanse, K. Evaluation of queueing policies and forwarding strategies for routing in intermittently connected networks. In Proceedings of the International Conference on Communication System Software and Middleware, New Delhi, India, 8–12 January 2006. [Google Scholar]
Figure 1. An example of a Delay-Tolerant-Network scenario.
Figure 2. The Store-Carry-Forward strategy.
Figure 3. The classification of related works on the extended PRoPHET protocols.
Figure 4. Examples of the forwarding scenario. (a) PRoPHET with GRTR, (b) Proposed protocol.
Figure 5. A flowchart of the proposed protocol.
Figure 6. An example of the message delivery: (a) before contact, (b) after contact.
Figure 7. The delivery ratio for different buffer sizes.
Figure 8. The overhead ratio for different buffer sizes.
Figure 9. The delivery latency for different buffer sizes.
Figure 10. The delivery ratio for different message generation intervals.
Figure 11. The overhead ratio for different message generation intervals.
Figure 12. The delivery latency for different message generation intervals.
Figure 13. The delivery ratio for different numbers of trams.
Figure 14. The overhead ratio for different numbers of trams.
Figure 15. The delivery latency for different numbers of trams.
Figure 16. The delivery ratio for different numbers of pedestrians.
Figure 17. The overhead ratio for different numbers of pedestrians.
Figure 18. The delivery latency for different numbers of pedestrians.
Figure 19. The delivery ratio for different numbers of cars.
Figure 20. The overhead ratio for different numbers of cars.
Figure 21. The delivery latency for different numbers of cars.
Table 1. The simulation parameter values.
Area size (m2): 4500 × 3400 [25,29]
Number of rngSeeds: 5
Router: PRoPHET Router
P_encounter: 0.75 [30]
γ: 0.98 [30]
Δ: 0
β: 0.25 [30]
NFmax: 5
Pmax (initial value): 0
Movement model: pedestrian, car: Shortest Path Map Based Movement [25]; tram: Map Route Movement [25]
Speed (m/s): pedestrian: U[0.5, 1.5] [25]; car: U[2.7, 13.9]; tram: U[7, 10]
Number of total nodes: 126 (default)
Number of pedestrians: 80 (default)
Number of cars: 40 (default)
Number of trams: 6 (default)
Simulation time (s): 216,000
Transmission range (m): pedestrian, car: 10 (interface1) (default) [19]; tram: 10 (interface1), 1000 (interface2) (default) [19]
Packet transmission speed: 250 KByte/s [19]
Buffer size (MBytes): pedestrian, car: 50 (default); tram: 500 (default)
Message generation interval (s): U[25, 35]
Message size (Bytes): U[500 k, 1 M] [30]
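The PRoPHET parameters above (P_encounter = 0.75, γ = 0.98, β = 0.25) enter the standard delivery predictability updates of Lindgren et al. [8], on which the proposed protocol builds. A sketch of these three updates, using the Table 1 values (function names are our own):

```python
P_ENCOUNTER, GAMMA, BETA = 0.75, 0.98, 0.25  # values from Table 1

def on_encounter(P_ab):
    # Direct update when nodes a and b meet:
    # P(a,b) = P(a,b)_old + (1 - P(a,b)_old) * P_encounter
    return P_ab + (1.0 - P_ab) * P_ENCOUNTER

def age(P_ab, k):
    # Aging over k elapsed time units since the last update:
    # P(a,b) = P(a,b)_old * gamma^k
    return P_ab * (GAMMA ** k)

def transitive(P_ac, P_ab, P_bc):
    # Transitive update (a can reach c through b):
    # P(a,c) = P(a,c)_old + (1 - P(a,c)_old) * P(a,b) * P(b,c) * beta
    return P_ac + (1.0 - P_ac) * P_ab * P_bc * BETA

p = on_encounter(0.0)        # first contact raises P from 0 to 0.75
p = age(p, 10)               # decays while the nodes are apart
p = transitive(0.2, p, 0.5)  # indirect evidence via an intermediate node
```

With γ = 0.98 per time unit, predictability halves after roughly 34 units without contact, which is why stale contacts stop attracting messages.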

Share and Cite

MDPI and ACS Style

Lee, E.H.; Seo, D.Y.; Chung, Y.W. An Efficient Routing Protocol Using the History of Delivery Predictability in Opportunistic Networks. Appl. Sci. 2018, 8, 2215. https://doi.org/10.3390/app8112215

