Article

An Energy Efficient Local Popularity Based Cooperative Caching for Mobile Information Centric Networks

1
Department of Computer Systems Engineering, University of Engineering & Applied Sciences, Swat 19201, Pakistan
2
Department of Electrical Engineering, Sarhad University of Science Information Technology, Peshawar 25000, Pakistan
3
Department of Electrical and Computer Engineering, COMSATS University Islamabad, Abbottabad Campus, Abbottabad 22010, Pakistan
4
Ocean College, Zhejiang University, Zhoushan 316021, China
5
Department of Electrical Engineering, College of Electronics and Information Engineering, Sejong University, Seoul 05006, Korea
*
Authors to whom correspondence should be addressed.
Sustainability 2022, 14(20), 13135; https://doi.org/10.3390/su142013135
Submission received: 30 August 2022 / Revised: 8 October 2022 / Accepted: 10 October 2022 / Published: 13 October 2022

Abstract

The usage of social media applications such as YouTube and Facebook is increasing rapidly with each passing day. These applications are used for uploading informational content such as images, videos, and audio, which results in exponential traffic overhead. Due to this overhead (high bandwidth consumption), service providers fail to provide high-speed, low-latency Internet to users around the globe. The current Internet cannot cope with such high data traffic because of its fixed, host-centric infrastructure, which degrades network performance. A new Internet paradigm known as Information Centric Networking (ICN) was introduced, based on content-oriented addressing. The idea of ICN is to satisfy a request locally through neighbor nodes without accessing the source, which helps offload the network's data traffic. ICN can thus mitigate traffic overhead and meet future Internet requirements. In this work, we propose a novel decentralized placement scheme named self-organized cooperative caching (SOCC) for mobile ICN to improve overall network performance through efficient bandwidth utilization and reduced traffic overhead. The proposed scheme outperforms state-of-the-art schemes, improving energy saving by up to 55% and the cache hit rate by a minimum of 35%, while also reducing average latency.

1. Introduction

Emerging technologies in mobile networks provide new facilities to both consumers and service providers. The resulting increase in data traffic is surging, mostly because of social applications [1]. This exponential data growth places a significant burden on the current wireless infrastructure [2,3]. According to the Cisco report [4], there are nearly 4.5 billion Internet users, a number predicted to grow to approximately 5.3 billion by the end of 2023. Handheld devices are equipped with new technologies (Wi-Fi Direct, Near Field Communication, Bluetooth, etc., supported by 3GPP), high storage capacity, and fast processors. Building on these layer-2 technologies, a concept known as device-to-device (D2D) communication has been introduced that allows mobile devices to share data with nearby devices without using any core services. D2D communication enables a mobile device to connect with several other mobile devices in a multi-hop manner. These emerging technologies promote the technique of cooperative caching. ICN is a new paradigm in which a cooperative technique is used to offload the traffic overhead of the network. It follows a content-based addressing technique, where contents are placed close to the end-users so that a request can be satisfied by a neighboring user, minimizing the need to contact the source. In this way, the traffic overhead on the core network can be minimized with low latency, benefiting service providers by reducing bandwidth wastage [5,6]. ICN is a futuristic approach that meets the requirements of the future Internet and cellular networks [7]. It also enables energy-efficient operation in advanced wireless communication technologies such as 5G [8].
Multiple schemes, such as [9,10], have sought to optimize the overall performance of ICN networks by considering various parameters such as low latency, energy efficiency, high cache hit ratio, packet delivery, and content granularity. Table 1 shows the abbreviations of some important keywords.
The focus of this paper is to minimize energy consumption, lower latency, and improve the cache hit rate. To this end, this paper proposes a decentralized cooperative caching scheme for mobile ICN.
The main contributions of this work are:
  • We establish a cooperative caching model for wireless ICN to minimize energy consumption, considering significant constraints such as limited storage capacity, content popularity, content access, and content placement.
  • The proposed scheme can support device-to-device (D2D) communication and minimizes transportation and energy costs.
  • The proposed system improves the overall performance of the network in terms of efficient bandwidth utilization, low latency, and low energy consumption.
The rest of the paper is organized as follows. Section 2 provides the related work. In Section 3, we explain the methodology. In Section 4, we present the Network and Energy consumption models. In Section 5, we present the results and discussion. Finally, in Section 6, we draw our conclusions and present the future directions.

2. Related Work

To cope with the exponential growth of data traffic, a new paradigm known as ICN was introduced. It facilitates end-users by satisfying requests from their neighbor users: packets of content are kept at the network's edge, which helps users acquire their desired data from the edge. In [11], the authors proposed a social-based QoS routing scheme for ICN. They devised three types of relationships between nodes, namely neighbors (NB), interest friends (IF), and response friends (RF), and used a content-popularity-based caching scheme to decide whether a content should be cached in the CS. A content with a high probability of being requested in the future is given higher priority for placement in the limited-size CS; when the CS is full, content with a low probability of future requests is replaced by content with a high request probability. In [12], the authors proposed a scheme known as Cache Everything Everywhere (CEE) to mitigate the average hop latency by placing content at the neighbor user. It reduces the hop count for content retrieval but suffers from cache redundancy: along the path between requester and provider, each node caches the content in its own memory buffer, so all nodes on the path hold redundant copies. Since mobile devices have limited storage capacity, they apply an eviction scheme whenever their memory is full. In [13], the authors proposed a deep-learning-based mobile network architecture that identifies QoS by directly mapping the state of mobile networks.
To overcome the problem of data redundancy, the authors in [14] devised a probabilistic caching scheme, in which redundancy is reduced by calculating, for each data content arriving at an ICN node between requester and provider, a probability that determines whether the content should be cached. In [15], the authors investigated the performance of different caching schemes under various content request models. The authors in [16] used an approach known as WAVE to overcome the cache redundancy problem: the content is divided into small chunks containing the data packets, and the requester sends a request for every chunk to acquire a data packet. This approach reduces both data redundancy and packet loss. Since most traffic overhead is caused by popular content, data of user interest can be placed at the network edge, nearer to the end users, to reduce traffic overhead.
In [17], the authors proposed an approach that infers the user's content interests from network details and places the desired contents at the network edge by using a social distance parameter, on which the caching of arriving content is based. The authors in [18] proposed a new energy-aware scheme that improves the LEACH protocol to reduce energy consumption. This approach optimally determines the cluster heads (CH) to minimize the average energy consumption and extend the lifespan of a wireless sensor network. Similarly, the authors in [19] proposed an efficient strategy to minimize the traffic load generated by social networks, thereby increasing the data rate and reducing the delay in content availability; the core network load is reduced as well.

3. Methodology

3.1. Overview of ICN Architecture

ICN is an emerging networking architecture designed to satisfy future Internet requirements. It was first devised by Van Jacobson et al. in [20] and is based on content addressing, where a user acquires the desired data from neighboring nodes; if a neighbor cannot satisfy the request, it is propagated further toward the content source. In this way, ICN improves the overall efficiency and throughput of the network. The typical Internet is IP-based, in which the user accesses the original content provider to attain the desired data, whereas ICN supports name-based addressing. Figure 1 illustrates the ICN architecture, which consists of four nodes (A, B, C, and D) connected through multiple hops. Each ICN node contains three data structures: the pending interest table (PIT), the content store (CS), and the forwarding information base (FIB). The PIT contains the unsatisfied requests for content that arrive at the node, along with the ID and face of the requesting node for path indication. The node checks the PIT for every unsatisfied request; if the desired content is not listed in the PIT, the node adds an entry for it and then forwards the request to its neighboring nodes. Once the request is satisfied by a neighboring node, the requesting node removes the entry from the PIT. The CS is a limited buffer where contents are stored temporarily. The FIB stores an entry for each satisfied request, containing the content name and the full address of the node that satisfied it; it is similar to the IP routing table managed by Internet routers. When a node cannot satisfy a desired data packet (i.e., the CS does not contain the requested content name and no entry is available in the PIT), it checks the FIB for the longest prefix match, which indicates the content name and outgoing faces. The packets generated in the network are called Interest packets and Data packets.
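The CS → PIT → FIB lookup order described above can be sketched as follows. This is a minimal, hypothetical illustration of an ICN node's interest and data handling, not an implementation from the paper; all class and method names are our own assumptions.

```python
# Illustrative sketch of the CS -> PIT -> FIB lookup order an ICN node
# applies to an incoming Interest packet (names are hypothetical).
class ICNNode:
    def __init__(self):
        self.cs = {}    # content store: name -> data
        self.pit = {}   # pending interest table: name -> set of requesting faces
        self.fib = {}   # forwarding information base: name prefix -> outgoing face

    def on_interest(self, name, in_face):
        if name in self.cs:                      # CS hit: reply with cached data
            return ("data", self.cs[name], in_face)
        if name in self.pit:                     # request already pending:
            self.pit[name].add(in_face)          # just record the extra face
            return ("aggregated", None, None)
        self.pit[name] = {in_face}               # new PIT entry, then forward via FIB
        return ("forward", None, self.longest_prefix_match(name))

    def longest_prefix_match(self, name):
        best = None
        for prefix, face in self.fib.items():
            if name.startswith(prefix) and (best is None or len(prefix) > len(best[0])):
                best = (prefix, face)
        return best[1] if best else None

    def on_data(self, name, data):
        faces = self.pit.pop(name, set())        # satisfy and clear the PIT entry
        self.cs[name] = data                     # caching decision is policy-dependent
        return faces                             # faces on which to return the data
```

A Data packet thus retraces the reverse path recorded by the PIT, while the FIB is consulted only when neither the CS nor the PIT can resolve the Interest.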

3.2. Proposed SOCC Caching Scheme

The ICN node architecture generally consists of three data structures, namely the PIT, CS, and FIB. In this paper, a new data structure, named the self-organized cooperative caching (SOCC) table, is incorporated into the node. The SOCC table, as shown in Figure 2, stores the requested content at the receiver node according to the frequency of its requests. The SOCC table is updated regularly and kept in descending order, as per Algorithm 1. The notations are highlighted in Table 2.
Algorithm 1 SOCC-table Convergence
Require: n is the current node and r_c is any request arrived for a content c along the path
  if (a request r_c for content c arrives at the node) then
    Update the SOCC table for the required content c
    Sort the SOCC table in descending order
  end if
Algorithm 1 represents the convergence operation on the local popularity table SOCC at the arrival of each request r_c at the corresponding node n. As soon as the request is received, the node's SOCC table is updated and all its entries are sorted in descending order. A cache hit occurs if the requested content already exists at the corresponding node n; in that case, the rank of the request is updated in the SOCC table and the data packet c is forwarded along the reverse path to the node that requested the content. If the requested data are not available, the node generates an interest packet and disseminates it to the neighboring nodes.
Suppose a user needs a content c. The corresponding node generates an interest packet r_c and forwards it to its neighbor nodes. At the same time, Algorithm 1 updates the SOCC table at the current node, increments the popularity of the requested content by one, and sorts the SOCC table in descending order of popularity. On the other hand, when a request packet r_c arrives at a particular node, the node checks its cache B; if the requested content c is available, it is sent back on the same path (face) from which the interest packet came. If the cache B is full, the node makes space for newly arriving content by removing the least popular content. Likewise, based on the popularity of the arrived content, the node decides whether or not to cache it.
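The counting and ranking step of Algorithm 1 can be sketched in a few lines of Python. This is an illustrative sketch under our own naming, not the authors' implementation:

```python
from collections import Counter

# Minimal sketch of Algorithm 1: each request bumps the content's local
# popularity count; ranking the counter in descending order yields the
# SOCC table (class and method names are hypothetical).
class SOCCTable:
    def __init__(self):
        self.counts = Counter()

    def on_request(self, content):
        self.counts[content] += 1     # update the entry for content c

    def ranking(self):
        # descending popularity order; Counter.most_common sorts by count
        return [c for c, _ in self.counts.most_common()]

    def least_popular(self):
        # candidate for eviction when the cache B is full
        return self.ranking()[-1] if self.counts else None
```

Because the table is purely local, keeping it converged requires no message exchange with other nodes, only a per-request counter update and sort.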
To better understand Algorithm 2, suppose we have a small wireless network as shown in Figure 1, consisting of N nodes (Node A, Node B, Node C, …, Node N). If the user node A generates an interest packet and forwards it to its neighbor nodes (node B, node C, and node D), and the neighbor node D already holds a copy of the corresponding content, then node D sends the data packet back to the requester node A.
Algorithm 2 The proposed placement scheme SOCC at the arrival of c
Require: n is the current node and c is any content arrived along the path
if (c ∉ B_n and Requester(r_c) == n) then ▹ c not available in the local cache and node n is the requester
    if (B_n < B) then
        ▹ Insert c in B_n
    else
        if (B_n == B) then
            ▹ Replace the least recently used content and insert the arrived c in B_n
        end if
    end if
end if
Forward to the neighbor nodes in order to reach the Requester
Upon the arrival of the data packet at node A, node A first checks its own buffer B: if the arrived content is already available, it discards the arrived content c; otherwise (c ∉ B_n), it decides whether to cache the new content c in B. If the cache B of node A is full and cannot accommodate the arrived c, Algorithm 2 uses the LRU replacement strategy to remove the oldest content from node A's cache and make space for the new content c. The complexity of the proposed scheme is similar to that of LRU; only additional storage is required for maintaining the convergence table SOCC. The flowchart of the proposed approach is shown in Figure 3.
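The placement and LRU eviction step of Algorithm 2 can be sketched with an ordered dictionary. This is a hypothetical illustration of the cache behavior described above, not the paper's code; class and method names are assumptions.

```python
from collections import OrderedDict

# Sketch of Algorithm 2's placement step: on arrival of content c at the
# requester node, cache it if absent; when the buffer B is full, evict the
# least recently used entry (names are hypothetical).
class SOCCCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = OrderedDict()         # name -> data, oldest first

    def on_data(self, name, data, is_requester):
        if name in self.buffer or not is_requester:
            return False                     # duplicate, or a transit node: discard
        if len(self.buffer) >= self.capacity:
            self.buffer.popitem(last=False)  # evict the least recently used content
        self.buffer[name] = data
        return True

    def lookup(self, name):
        if name in self.buffer:
            self.buffer.move_to_end(name)    # refresh recency on a cache hit
            return self.buffer[name]
        return None
```

The extra work over plain LRU is only the SOCC table bookkeeping shown earlier, which matches the paper's claim that the scheme's complexity stays close to LRU.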

3.3. Content Request Generation Process

The request generation behaviour of a typical user is generally modeled using the Zipf distribution, a discrete distribution that captures the frequency with which contents are requested and is used to generate random requests, particularly for user-generated contents [21]. Equation (1), as in [17], is used to calculate the probability distribution of each content, representing the number of times a data content is requested by the users in a network. In Equation (1), q_k denotes the probability of the corresponding content, γ is the normalization factor, α is the distribution parameter, and k is the index of the corresponding content.
q_k = γ / k^α,  for k = 1, …, C
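Equation (1) can be evaluated directly once γ is fixed by normalization; a short sketch (function name is our own) follows.

```python
# Sketch of the Zipf popularity model of Equation (1): q_k = gamma / k**alpha,
# with the normalization factor gamma chosen so the probabilities over the
# catalogue of C contents sum to 1.
def zipf_probabilities(C, alpha):
    gamma = 1.0 / sum(k ** -alpha for k in range(1, C + 1))
    return [gamma / k ** alpha for k in range(1, C + 1)]
```

For the linear-scenario parameters reported later (C = 100, α = 1.3), the resulting distribution is strongly skewed toward the lowest-indexed (most popular) contents.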

3.4. Performance Metrics

The performance of the proposed approach is measured using the following three major metrics:
  • Cache hit rate: As shown in Equation (2), it is the ratio of the requests satisfied by the caching node to the total number of requests.
    HitRatio = Hits / (Hits + Misses)
  • Average response time: the average time required to deliver the requested content to the requester node in a network. Minimizing the hop count decreases the response or waiting time of a request.
  • Energy saving rate: the amount of energy saved by a caching policy in comparison to a system without caching.
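The first and third metrics are simple ratios; the sketch below (function names are our own) mirrors Equation (2) and the energy-saving definition above.

```python
# Helpers for two of the metrics above (illustrative, not the paper's code).

def hit_ratio(hits, misses):
    # Equation (2): fraction of requests satisfied by the caching node
    return hits / (hits + misses)

def energy_saving_rate(e_with_cache, e_no_cache):
    # fraction of energy saved relative to the no-caching baseline
    return 1.0 - e_with_cache / e_no_cache
```

For example, a scheme that serves 35 of 100 requests from caches has a hit ratio of 0.35, and one that consumes 45 J where the no-cache baseline consumes 100 J saves 55% of the energy.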

4. System Model

Consider a random topology of an ICN network, constructed as a connected graph G = (N, E), where N = {n_1, n_2, …, n_N} represents the set of nodes and E ⊆ N × N indicates the bidirectional links. Let C = {c_1, c_2, …, c_C} denote the collection of contents in the network. At the start, it is assumed that the server pre-stores the entire available catalogue of contents C. The end nodes of the network are responsible for receiving user interest packets and forwarding the content to the requester node. In the constructed ICN network, each ICN node n_i has the capacity to cache M_i data contents in its memory buffer.

4.1. Linear Topology

In the linear topology, a linear wireless network of N ICN nodes is considered. As shown in Figure 4, the nodes are connected to the server in a multi-hop manner. The server (access point) is assumed to be equipped with the content catalogue C. The buffers B of all the nodes are of the same size. The nodes act both as content requesters and providers (relaying nodes).

4.2. Mobile Topology

Mobility acts as a double-edged sword in networking. It can enhance network performance by bringing the requested contents closer to the requester node, thus decreasing the average latency, but it can also adversely affect system performance: if a specific node moves away, the requester node must search for the content again. In this work, a wireless mobile network is considered, consisting of N mobile ICN nodes interconnected with each other and deployed at random positions with probability p, as shown in Figure 5. The server, equipped with the content catalogue C, is placed at the centre and connected with the nodes in a multi-hop manner.

4.3. An Energy Consumption Model

The energy consumption model in [20] is used to calculate the total energy consumption of the ICN network. The total energy consumption E_tot of the network is the sum of the energy consumed while transmitting (E_t) and caching the contents (E_c), as shown in Equation (3).
E_tot = E_t + E_c
Since we deal here only with data packets instead of chunks, for any node n_i it is assumed that s_k = 1. The request rate for content c_k at node n_i within a time interval t is q_ik. Each node consumes a caching power density ω_c in its buffer, and each link has an energy density ω_l. According to the energy model in [20], the caching energy E_c consumed at each individual node n_i to cache a content for time t is shown in Equation (4).
E_c = ω_c t
Suppose a node n_j requests a content from another node n_i. The distance between these nodes, measured in hops, is h_ij, so the path comprises h_ij + 1 nodes. The total energy consumed by the nodes for handling the request is q_ik ω_n (h_ij + 1), where ω_n is the per-node energy density, and the energy consumed by the links for transmitting it is q_ik ω_l h_ij. The transmission energy E_t consumed by the nodes and the communication links while transmitting the content c_k from node n_i to node n_j is calculated using Equation (5).
E_t = q_ik [ h_ij (ω_n + ω_l) + ω_n ]
As shown in Equation (6), the total energy consumed is calculated using the values of E_c and E_t from Equations (4) and (5), respectively.
E_tot = ω_c t + q_ik [ h_ij (ω_n + ω_l) + ω_n ]
When the content is served by the server, the total energy consumption E_tot can be calculated using Equation (7), where h_sj is the hop distance between the server and the requesting node n_j.
E_tot = q_ik [ h_sj (ω_n + ω_l) + ω_n ]
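Equations (3) to (5) combine into a single expression that can be evaluated numerically; the sketch below (function name is our own) follows the symbols of the model.

```python
# Sketch of the energy model of Equations (3)-(5): caching energy plus
# transmission energy over h_ij hops. Symbol names follow the paper; the
# function itself is an illustration, not the authors' code.
def total_energy(q_ik, h_ij, omega_n, omega_l, omega_c, t):
    e_c = omega_c * t                                    # caching energy, Eq. (4)
    e_t = q_ik * (h_ij * (omega_n + omega_l) + omega_n)  # transmission energy, Eq. (5)
    return e_c + e_t                                     # total, Eq. (3)/(6)
```

Replacing h_ij with the server distance h_sj and dropping the caching term reproduces the server-side cost of Equation (7), which is why shortening the average hop distance through caching directly lowers E_tot.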

4.4. Simulation Environment

The simulations were performed in OMNeT++ [22], considering two different scenarios: linear and mobile. It is assumed that content popularity follows the Zipf distribution. Following a Poisson process, users generate 1 request per 300 s for the content of interest. Wireless communication between ICN nodes is enabled using Wi-Fi Direct (IEEE 802.11 standard). The Random Walk mobility model is used for mobile nodes: each node randomly changes its position within a defined area, with random speed and direction, at time intervals t. Each node transmits data at a constant bit rate (CBR), with a maximum transmission range of 30 m. For each communication link between nodes, the bandwidth is 2 Mbps, with 10 dB channel noise and 0.33 s propagation delay. Each node has two contention windows and can transmit up to 250 kbps. It is assumed that each ICN node consumes 1 joule of energy for bi-directional communication.
For the linear scenario, a fixed node storage size of B = 25 is assumed, while for the mobile scenario, a cache size varying from 10 MB to 500 MB is considered for each node. Initially, the caches are empty and of the same size. Measurements are taken once the network is in a steady state and the caches are full of content. The simulations are run 10 times for each caching scheme, and the results are reported as averages. The selection of the requester node and of the content from the catalogue {c_1, c_2, …, c_C} follows a uniform distribution and the Zipf distribution, respectively. Since edge nodes may cover a large number of users, the average request rate λ at edge nodes follows a uniform distribution. The parameters used are given in Table 3.

5. Results and Discussions

5.1. Impact on Average Latency

Figure 6 shows the performance of the proposed approach and other state-of-the-art approaches in a linear wireless network. The size of the content catalogue is 100 and the Zipf distribution parameter is α = 1.3. The cache size of each ICN node is assumed to be B = 25, which enables each node to store the top 25 most requested contents in its buffer.
As the distance between a node and the server increases, the latency increases while the request satisfaction decreases, and vice versa; the same holds for the hit ratio and throughput. The proposed caching scheme SOCC performed better than the others by a minimum of 1.3 hops. The NO-CACHE scheme performed worst compared to the other approaches and is taken as our reference point. Cache Everything Everywhere (CEE) and Random Caching with p = 0.9 obtained the same results but failed to outperform our proposed approach.
In the mobile scenario, we assume that the server is equipped with a catalogue of 1000 contents and a Zipf popularity parameter of α = 0.8. The cache size is the same for all nodes, varying from 10 MB to 500 MB. In Figure 7, as the cache size increases, the probability of satisfying users' requests increases, thus decreasing the average latency for all approaches. The greater the caching capacity, the greater the chance of caching the content; hence the latency, which depends on the number of hops, becomes smaller. For smaller cache sizes, such as 50 MB, the proposed approach achieved a minimum average latency of 3.75 hops compared to CEE, Random Caching p = 0.9, and p = 0.5, which achieved average latencies of 4.00, 4.5, and 5.4 hops, respectively. As the cache size increases to 100 MB, the average latency for the proposed scheme decreases to 3.29 hops, while for CEE, Random Caching p = 0.9 and p = 0.5 it decreases to 3.4, 4.00, and 5.1, respectively. Figure 7 shows that at a cache size of 500 MB, our proposed approach outperformed the other state-of-the-art approaches by achieving a minimum average latency of 2.0 hops.

5.2. Impact on Cache Hit Ratio

Figure 8 shows that as the cache size increases from 10 MB to 500 MB, the cache hit ratio also increases. For smaller cache sizes such as 10 MB, a node has limited storage for caching content; thus the performance of all the approaches improves only slightly. Since CEE also caches non-popular content, it achieved a smaller hit ratio of 10%, while Random Caching p = 0.9 achieved a 26% cache hit ratio, which is 16% better than CEE. The better performance of Random Caching p = 0.9 stems from its caching of contents with a probability of 0.9. As the cache size increases from 10 MB to 210 MB, the storage capacity of the node increases, and hence the overall performance of all the approaches also increases. In Figure 8 it can be observed that, at a cache size of 210 MB, the cache hit ratios of CEE, p = 0.9, p = 0.5, and SOCC increase to 35%, 45%, 53%, and 61%, respectively. At a cache size of 500 MB, SOCC outperformed the other schemes by achieving the highest hit ratio of 65%, while CEE, p = 0.9, and p = 0.5 achieved hit ratios of 25%, 15%, and 6%, respectively.

5.3. Impact on Energy Consumption

In a wireless communication network, energy consumption is high due to the movement of devices from one position to another. Here, energy is consumed during caching (E_c) and during transmission of the content (E_t). To minimize overall energy consumption, the transmission energy E_t needs to be minimized, as most of the energy is consumed during content transmission. Efficient caching reduces the average latency and thereby minimizes the transmission energy.
Figure 9 represents the relationship between the energy saving ratio and the cache size, computed for CEE, Random Caching p = 0.9, Random Caching p = 0.5, and the proposed caching scheme. At a 10 MB cache size, CEE saves only 20% energy, which is the worst result, while Random Caching p = 0.9 performed considerably better due to its probabilistic strategy; the proposed scheme SOCC saves the most energy, at 31.53%. At a 210 MB cache size, CEE, p = 0.9, p = 0.5, and SOCC achieve 19.2%, 38.4%, 46.78%, and 52.32% energy saving, respectively. At 500 MB, CEE, p = 0.9, and p = 0.5 achieved energy savings of 26.11%, 15.97%, and 5.91%, respectively, while SOCC achieved the highest energy saving of 55.45%. These experimental results validate that the SOCC scheme performs efficiently at the maximum storage capacity of the node.

6. Conclusions and Future Works

The current Internet infrastructure is unable to cope with the huge data growth driven by social applications, which ultimately degrades the overall experience of both the service provider and the requester. Cooperative caching helps to reduce the network load and utilize bandwidth efficiently.
In this paper, we proposed a novel placement scheme based on the local popularity of contents. The scheme maintains a real-time ranking table in which contents are placed based on their popularity in the network; no extra communication overhead is needed to maintain this table. Due to this effective cooperative caching, the network's performance is enhanced compared to other caching schemes. The proposed approach was implemented in OMNeT++ and outperformed other state-of-the-art approaches in terms of average latency (by 2 hops), average hit ratio (by 35%), and average energy saving (by 55.45%). In the future, we will evaluate the scalability and efficiency of the proposed system using different wireless network topologies and other performance parameters.

Author Contributions

Conceptualization, J.I.; Data curation, A.R.; Formal analysis, Z.u.A. and A.Z.; Funding acquisition, S.A.H.M. and M.H.A.; Investigation, N.A. and M.H.A.; Methodology, J.I., N.A., S.H.K. and A.Z.; Project administration, S.A.H.M.; Resources, A.R.; Software, J.I., S.H.K. and A.R.; Supervision, M.H.A.; Validation, S.H.K. and S.A.H.M.; Writing—original draft, Z.u.A., N.A. and A.Z.; Writing—review & editing, S.A.H.M. and M.H.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ye, X.; Chen, M. Personalized Recommendation for Mobile Internet Wealth Management Based on User Behavior Data Analysis. Sci. Program. 2021, 2021, 9326932. [Google Scholar] [CrossRef]
  2. Iqbal, J.; Iqbal, M.A.; Ahmad, A.; Khan, M.; Qamar, A.; Han, K. Comparison of Spectral Efficiency Techniques in Device-to-Device Communication for 5G. IEEE Access 2019, 7, 57440–57449. [Google Scholar] [CrossRef]
  3. Mollahasani, S.; Eroğlu, A.; Demirkol, I.; Onur, E. Density-aware mobile networks: Opportunities and challenges. Comput. Netw. 2020, 175, 107271. [Google Scholar] [CrossRef]
  4. Cisco Annual Internet Report (2018–2023), Tech. Rep., 2020. Available online: https://www.cisco.com/c/en/us/solutions/collateral/executive-perspectives/annual-internet-report/white-paper-c11-741490.html (accessed on 5 August 2022).
  5. Siddiqui, M.U.A.; Qamar, F.; Tayyab, M.; Hindia, M.N.; Nguyen, Q.N.; Hassan, R. Mobility Management Issues and Solutions in 5G-and-Beyond Networks: A Comprehensive Review. Electronics 2022, 11, 1366. [Google Scholar] [CrossRef]
  6. Tran, T.X.; Pompili, D. Adaptive Bitrate Video Caching and Processing in Mobile-Edge Computing Networks. IEEE Trans. Mob. Comput. 2019, 18, 1965–1978. [Google Scholar] [CrossRef]
  7. Gupta, M.; Garg, A. A Perusal of Replication in Content Delivery Network. In Next-Generation Networks; Lobiyal, D.K., Mansotra, V., Singh, U., Eds.; Springer: Singapore, 2018; pp. 341–349. [Google Scholar] [CrossRef]
  8. Khanh, Q.V.; Hoai, N.V.; Manh, L.D.; Le, A.N.; Jeon, G. Wireless Communication Technologies for IoT in 5G: Vision, Applications, and Challenges. Wirel. Commun. Mob. Comput. 2022, 2022, 12. [Google Scholar] [CrossRef]
  9. Cao, X.; Liu, L.; Cheng, Y.; Shen, X. Towards Energy-Efficient Wireless Networking in the Big Data Era: A Survey. IEEE Commun. Surv. Tutorials 2018, 20, 303–332. [Google Scholar] [CrossRef]
  10. Soleimani, S.; Tao, X. Caching and placement for in-network caching in device-to-device communications. Wirel. Commun. Mob. Comput. 2018, 2018, 9539502. [Google Scholar] [CrossRef]
  11. Qu, D.; Wang, X.; Huang, M.; Li, K.; Das, S.K.; Wu, S. A Cache-Aware Social-Based QoS Routing Scheme in Information Centric Networks. J. Netw. Comput. Appl. 2018, 121, 20–32. [Google Scholar] [CrossRef]
  12. Sourlas, V.; Flegkas, P.; Tassiulas, L. Cache-aware routing in Information-Centric Networks. In Proceedings of the 2013 IFIP/IEEE International Symposium on Integrated Network Management (IM 2013), Ghent, Belgium, 27–31 May 2013; pp. 582–588. [Google Scholar]
  13. Luo, G.; Yuan, Q.; Li, J.; Wang, S.; Yang, F. Artificial Intelligence Powered Mobile Networks: From Cognition to Decision. IEEE Netw. 2022, 36, 136–144. [Google Scholar] [CrossRef]
  14. Psaras, I.; Chai, W.K.; Pavlou, G. Probabilistic In-network Caching for Information-centric Networks. In Proceedings of the Second Edition of the ICN Workshop on Information-Centric Networking, New York, NY, USA, 17 August 2012; ICN ’12. ACM: New York, NY, USA, 2012; pp. 55–60. [Google Scholar] [CrossRef]
  15. Iqbal, J.; Giaccone, P.; Rossi, C. Local cooperative caching policies in multi-hop D2D networks. In Proceedings of the 2014 IEEE 10th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob), Larnaca, Cyprus, 8–10 October 2014; pp. 245–250. [Google Scholar] [CrossRef]
  16. Cho, K.; Lee, M.; Park, K.; Kwon, T.T.; Choi, Y.; Pack, S. WAVE: Popularity-based and collaborative in-network caching for content-oriented networks. In Proceedings of the 2012 Proceedings IEEE INFOCOM Workshops, Orlando, FL, USA, 25–30 March 2012; pp. 316–321. [Google Scholar] [CrossRef]
  17. Iqbal, J.; Giaccone, P. Interest-based cooperative caching in multi-hop wireless networks. In Proceedings of the 2013 IEEE Globecom Workshops (GC Wkshps), Atlanta, GA, USA, 9–13 December 2013; pp. 617–622. [Google Scholar] [CrossRef] [Green Version]
  18. Zhang, S.; Li, J.; Yang, Q.; Qin, M.; Kwak, K.S. Residual-energy Aware LEACH Approach for Wireless Sensor Networks. In Proceedings of the 2019 Eleventh International Conference on Ubiquitous and Future Networks (ICUFN), Zagreb, Croatia, 2–5 July 2019; pp. 413–418. [Google Scholar] [CrossRef]
  19. Wang, T.; Li, P.; Wang, X.; Wang, Y.; Guo, T.; Cao, Y. A Comprehensive Survey on Mobile Data Offloading in Heterogeneous Network. Wirel. Netw. 2019, 25, 573–584. [Google Scholar] [CrossRef]
  20. Jacobson, V.; Smetters, D.K.; Thornton, J.D.; Plass, M.F.; Briggs, N.H.; Braynard, R.L. Networking named content. In Proceedings of the 5th International Conference on Emerging Networking Experiments and Technologies, Rome, Italy, 1–4 December 2009; pp. 1–12. [Google Scholar] [CrossRef]
  21. Adamic, L.A.; Huberman, B.A. Zipf’s law and the Internet. Glottometrics 2002, 3, 143–150. [Google Scholar]
  22. Varga, A. OMNeT++. In Modeling and Tools for Network Simulation; Springer: Berlin/Heidelberg, Germany, 2010; pp. 35–59. [Google Scholar] [CrossRef]
Figure 1. ICN Architecture: A network of four routers (A, B, C and D) and multiple mobile nodes.
Figure 2. Proposed ICN Node.
Figure 3. Flow Chart of the proposed cooperative caching scheme.
Figure 4. A linear network with one server and N nodes. The server, equipped with the content catalogue C, is placed at the centre and connected to the nodes in a multi-hop manner. The request packet r_c travels to the left and the content c travels to the right.
Figure 5. A mobile network consisting of one server and N mobile ICN nodes deployed at random positions with probability p. The server, equipped with the content catalogue C, is placed at the centre and connected to the nodes in a multi-hop manner.
Figure 6. Average latency for a small buffer size (B = 25).
Figure 7. Average latency vs. Cache size.
Figure 8. Cache hit rate vs. Cache size.
Figure 9. Energy saving rate vs. Cache size.
Table 1. Abbreviations of key terms.
Abbreviation | Expansion
D2D | Device-to-Device
NFC | Near Field Communication
ICN | Information Centric Network
UGC | User Generated Content
PIT | Pending Interest Table
CS | Content Store
Node | Communication device (fixed or mobile)
FIB | Forwarding Information Base
SOCC | Self-Organized Cooperative Caching
CBR | Constant Bit Rate
Table 2. Notation of key parameters.
Symbol | Meaning
C | Total content catalogue
N | Total number of network nodes/users
B | Buffer size
r_c | Request packet issued by a node n for content c
c | Requested content
s_k | Size of the content object c_k
q_ik | Request rate for c_k at node n_i
ω_c | Power density of caching in a content router (W/bit)
ω_n | Energy density of a node (J/bit)
ω_l | Energy density of a link (J/bit)
h_ij | Distance in hops between n_i and n_j
E_t | Energy required for transporting packets
E_c | Energy consumed for caching (storing) contents
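The parameters in Table 2 suggest the usual two-part energy model from the in-network caching literature: a transport term that grows with hop distance and a caching term that grows with storage time. The sketch below illustrates that split; the exact formulas, the function names, and the default parameter values (taken from Table 3, with ω_r standing in for the node energy density) are our assumptions, not equations stated by the paper.

```python
def transport_energy(s_bits, hops, w_n=2e-8, w_l=1.5e-9):
    # E_t: moving s_bits over `hops` hops, where each hop costs
    # node processing energy (w_n, J/bit) plus link transmission
    # energy (w_l, J/bit) per bit.
    return s_bits * hops * (w_n + w_l)

def caching_energy(s_bits, duration_s, w_c=1e-9):
    # E_c: keeping s_bits cached for duration_s seconds at a
    # caching power density of w_c watts per bit.
    return w_c * s_bits * duration_s

# Example: a 10 MB content object fetched over 3 hops and
# then cached for one hour.
s = 10 * 8 * 10**6                      # content size in bits
E_t = transport_energy(s, hops=3)
E_c = caching_energy(s, duration_s=3600)
total = E_t + E_c
```

Under this model, serving a request from a nearby cache reduces E_t linearly with the saved hops, which is the intuition behind the energy-saving results reported later.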
Table 3. Simulation parameters.
Parameter | Default | Range
Number of contents | 1000 | 100∼5000
Cache size of a node (MB) | 100 | 5∼500
User request pattern | Zipf: α = 0.8 | α = 0.6∼1.2
Data packet size | 10 MB | 10 MB∼30 MB
Number of nodes N | 10 | -
Buffer size B | 25 | -
ω_c (W/bit) | 1 × 10^-9 | -
ω_r (J/bit) | 2 × 10^-8 | -
ω_l (J/bit) | 1.5 × 10^-9 | -
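The request pattern in Table 3 (Zipf, α = 0.8, over 1000 contents) can be reproduced in a few lines. This is a minimal sketch of a Zipf-distributed request generator; the function names and the fixed seed are our own choices, not part of the paper's simulator.

```python
import random

def zipf_popularity(num_contents, alpha):
    # Zipf's law: p_k ∝ 1 / k^alpha for content ranks k = 1..num_contents.
    weights = [1.0 / (k ** alpha) for k in range(1, num_contents + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def draw_requests(num_requests, num_contents=1000, alpha=0.8, seed=42):
    # Sample 0-based content ranks according to the Zipf weights.
    rng = random.Random(seed)
    probs = zipf_popularity(num_contents, alpha)
    return rng.choices(range(num_contents), weights=probs, k=num_requests)

reqs = draw_requests(10_000)
# Rank 0 (the most popular content) dominates the trace, which is
# what makes popularity-based caching effective.
```

A smaller α (toward 0.6) flattens the popularity curve and makes caching harder; a larger α (toward 1.2) concentrates requests on a few contents, which is why the evaluation sweeps α over 0.6∼1.2.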
Share and Cite

Iqbal, J.; Abideen, Z.u.; Ali, N.; Khan, S.H.; Rahim, A.; Zahir, A.; Mohsan, S.A.H.; Alsharif, M.H. An Energy Efficient Local Popularity Based Cooperative Caching for Mobile Information Centric Networks. Sustainability 2022, 14, 13135. https://doi.org/10.3390/su142013135