Distributed Computer and Communication Networks

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Network Science".

Deadline for manuscript submissions: closed (31 October 2022) | Viewed by 31574

Special Issue Editor

Prof. Dr. Vladimir M. Vishnevsky
Department of Information and Networking Technologies, Institute of Control Sciences of Russian Academy of Sciences, 65 Profsoyuznaya Street, 119991 Moscow, Russia
Interests: computer systems and networks; queuing systems; telecommunications; discrete mathematics (extremal graph theory, mathematical programming); wireless data transmission networks

Special Issue Information

Dear Colleagues,

Computer networks are currently one of the main drivers of development in the telecommunications industry. The rapid change in network technologies aimed at providing users with new services and increasing the speed and quality of information transmission and processing has ensured the proliferation of computer networks in all spheres of human activity: the economy, science, culture, education, and industry. New network technologies, meeting the demands of the modern information society, open up new opportunities and new fields of application such as the Internet of Things (IoT), the Tactile Internet, ultra-reliable low-latency communication (URLLC), augmented, virtual, and mixed reality applications, etc. However, the rapid development of network technologies and the widespread implementation of computer networks would be impossible without the use and development of methods and algorithms for the mathematical modeling of network technologies, which enable comparative analysis, performance evaluation, and the selection of optimal parameters of computer networks. Improving the performance of existing networks and designing promising 5G/6G networks require the integrated use and development of methods and approaches from various fields of science, including queuing theory and reliability theory; optimal control theory and mathematical programming; the theoretical foundations of electrical engineering and mathematical physics; and extremal graph theory, stochastic processes, and machine learning.

This Special Issue is devoted to the development of new mathematical models, methods, and algorithms from the fields of science listed above, aimed at researching both terrestrial all-optical networks and new-generation broadband wireless networks, including promising UAV-based networks.

Prof. Dr. Vladimir M. Vishnevsky
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Analytical modeling of distributed systems and networks
  • Next-generation 5G/6G networks
  • QoS/QoE evaluation and network efficiency
  • Computer and telecommunication network control and management
  • Mathematical methods in queueing theory/reliability theory
  • Internet of Things and fog computing
  • Stochastic processes and machine learning
  • UAV-based wireless networks

Published Papers (17 papers)

Research


19 pages, 2914 KiB  
Article
Resource Allocation in V2X Communications Based on Multi-Agent Reinforcement Learning with Attention Mechanism
by Yuanfeng Ding, Yan Huang, Li Tang, Xizhong Qin and Zhenhong Jia
Mathematics 2022, 10(19), 3415; https://doi.org/10.3390/math10193415 - 20 Sep 2022
Cited by 6 | Viewed by 2118
Abstract
In this paper, we study the joint optimization problem of the spectrum and power allocation for multiple vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) users in cellular vehicle-to-everything (C-V2X) communication, aiming to maximize the sum rate of V2I links while satisfying the low latency requirements of V2V links. However, channel state information (CSI) is hard to obtain accurately due to the mobility of vehicles. In addition, the effective sensing of state information among vehicles becomes difficult in an environment with complex and diverse information, which is detrimental to vehicles collaborating for resource allocation. Thus, we propose a framework of multi-agent deep reinforcement learning based on an attention mechanism (AMARL) to improve the V2X communication performance. Specifically, for vehicle mobility, we model the problem as a multi-agent reinforcement learning process, where each V2V link is regarded as an agent and all agents jointly interact with the environment. Each agent allocates spectrum and power through its deep Q network (DQN). To enhance effective intercommunication and the sense of collaboration among vehicles, we introduce an attention mechanism to focus on more relevant information, which in turn reduces the signaling overhead and optimizes communication performance more explicitly. Experimental results show that the proposed AMARL-based approach can satisfy the requirements of a high rate for V2I links and low latency for V2V links. It also has excellent adaptability to environmental change. Full article
(This article belongs to the Special Issue Distributed Computer and Communication Networks)
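
For readers unfamiliar with the attention step mentioned in the abstract, the sketch below shows generic scaled dot-product attention applied to per-agent observation embeddings. It is not the authors' AMARL implementation; the agent count, embedding size, and projection matrices are illustrative assumptions.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Generic scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # (n_agents, n_agents)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # row-wise softmax
    return weights @ V, weights

# Toy setting: 4 V2V agents, each with an 8-dimensional observation embedding.
rng = np.random.default_rng(0)
n_agents, d_model = 4, 8
obs = rng.normal(size=(n_agents, d_model))

# Hypothetical projection matrices (in a real system these would be learned).
W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))

context, attn = scaled_dot_product_attention(obs @ W_q, obs @ W_k, obs @ W_v)
print("attention weights per agent:\n", attn.round(3))
```

Each row of the resulting weight matrix tells one V2V agent how strongly to weight every other agent's state before its DQN selects a spectrum/power action.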

14 pages, 3140 KiB  
Article
Remote Geotechnical Monitoring of a Buried Oil Pipeline
by Alla Yu. Vladova
Mathematics 2022, 10(11), 1813; https://doi.org/10.3390/math10111813 - 25 May 2022
Cited by 5 | Viewed by 1554
Abstract
Extensive but remote oil and gas fields in Canada and Russia require extremely long pipelines. Global warming and local anthropogenic effects drive the deepening of seasonal thawing of cryolithozone soils and enhance pathological processes such as frost heave, thermokarst, and thermal erosion. These processes lead to a reduction in the subgrade capacity of the soils, causing changes in the spatial position of the pipelines, consequently increasing the number of accidents. Oil operators are compelled to monitor the daily temperatures of unevenly heated soils along pipeline routes. However, they are confronted with the problem of separating anthropogenic heat losses from seasonal temperature fluctuations. To highlight heat losses, we propose a short-term prediction approach to a transformed multidimensional dataset. First, we define the temperature intervals according to the classification of permafrost to generate additional features that sharpen seasonal and permafrost conditions, as well as the timing of temperature measurement. Furthermore, linear and nonlinear uncorrelated features are extracted and scaled. The second step consists of selecting a training sample, learning, and adjusting the additive regression model. Forecasts are then made from the test sample to assess the accuracy of the model. The forecasting procedure is provided by the three-component model named Prophet. Prophet fits linear and nonlinear functions to define the trend component and Fourier series to define the seasonal component; the third component, responsible for the abnormal days (when the heating regime is changed for some reason), could be defined by an analyst. Preliminary statistical analysis shows that the subsurface frozen soils containing the oil pipeline are mostly unstable, especially in the autumn season. Based upon the values of the error metrics, it is determined that the most accurate forecast is obtained on a three-month uniform time grid. Full article
(This article belongs to the Special Issue Distributed Computer and Communication Networks)
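
The forecasting step described above relies on the open-source Prophet library; the minimal sketch below shows its typical trend + seasonality + special-days workflow. The file name, column names, forecast horizon, and the "heating_change" dates are placeholders, not the data set used in the paper.

```python
import pandas as pd
from prophet import Prophet

# Hypothetical daily sensor readings with columns "date" and "temp_c".
df = pd.read_csv("soil_temperature.csv").rename(columns={"date": "ds", "temp_c": "y"})

# Analyst-supplied abnormal days (e.g., when the heating regime was changed).
abnormal_days = pd.DataFrame({
    "holiday": "heating_change",
    "ds": pd.to_datetime(["2021-10-01", "2022-04-15"]),
})

m = Prophet(yearly_seasonality=True, holidays=abnormal_days)
m.fit(df)

future = m.make_future_dataframe(periods=90)      # three-month forecast horizon
forecast = m.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())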

26 pages, 353 KiB  
Article
Queueing-Inventory with One Essential and m Optional Items with Environment Change Process Forming Correlated Renewal Process (MEP)
by Jaison Jacob, Dhanya Shajin, Achyutha Krishnamoorthy, Vladimir Vishnevsky and Dmitry Kozyrev
Mathematics 2022, 10(1), 104; https://doi.org/10.3390/math10010104 - 29 Dec 2021
Cited by 6 | Viewed by 1264
Abstract
We consider a queueing inventory with one essential and m optional items for sale. The system evolves in environments that change randomly. There are n environments that appear in a random fashion governed by a Marked Markovian Environment change process. Customers demand the main item plus none, one, or more of the optional items, but are restricted to at most one unit of each optional item. The service time of the main item is phase-type distributed, and those of the optional items are exponentially distributed with parameters that depend on the type of the item as well as the environment under consideration. If the essential item is not available, service will not be provided. The lead times of the optional and main items are exponentially distributed with parameters that depend on the type of the item. The condition for stability of the system is analyzed by considering a multi-dimensional continuous-time Markov chain that represents the evolution of the system. Under this condition, various performance characteristics of the system are derived. In terms of these, a cost function is constructed and optimal control policies for the different types of commodities are investigated. Numerical results are provided to give a glimpse of the system performance. Full article
(This article belongs to the Special Issue Distributed Computer and Communication Networks)
19 pages, 2391 KiB  
Article
Fluid-Flow Approximation in the Analysis of Vast Energy-Aware Networks
by Monika Nycz, Tomasz Nycz and Tadeusz Czachórski
Mathematics 2021, 9(24), 3279; https://doi.org/10.3390/math9243279 - 16 Dec 2021
Cited by 1 | Viewed by 1777
Abstract
The paper addresses two issues: (i) modeling dynamic flows transmitted in vast TCP/IP networks and (ii) modeling the impact of energy-saving algorithms. The approach is based on the fluid-flow approximation, which applies first-order differential equations to analyze the evolution of queues and flows. We demonstrate that the effective implementation of this method overcomes the constraints of storing large data in numerical solutions of transient problems in vast network topologies. The model is implemented and executed directly in a database system. It can analyze transient states in topologies of more than 100,000 nodes, i.e., a size which was not considered until now. We use it to investigate the impact of an energy-saving algorithm on the performance of a vast network. We find that it reduces network congestion and saves energy costs but significantly lowers network throughput. Full article
(This article belongs to the Special Issue Distributed Computer and Communication Networks)
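
As a toy illustration of the fluid-flow idea (the authors' database-backed solver for 100,000-node topologies is far more elaborate), a single queue can be tracked with the first-order equation dq/dt = λ(t) - C, clipped at zero, and integrated with an explicit Euler step. All rates and the burst pattern below are made-up example values.

```python
import numpy as np

def fluid_queue(arrival_rate, capacity, q0=0.0, t_end=60.0, dt=0.01):
    """Euler integration of dq/dt = lambda(t) - C for one fluid queue.

    arrival_rate: function t -> instantaneous inflow (packets/s)
    capacity:     constant service rate C (packets/s)
    """
    times = np.arange(0.0, t_end, dt)
    q = np.empty_like(times)
    q[0] = q0
    for i in range(1, len(times)):
        dq = arrival_rate(times[i - 1]) - capacity
        q[i] = max(0.0, q[i - 1] + dq * dt)   # queue length cannot go negative
    return times, q

# Example: a 20 s burst at 120 packets/s into a 100 packets/s link.
lam = lambda t: 120.0 if 10.0 <= t <= 30.0 else 60.0
t, q = fluid_queue(lam, capacity=100.0)
print("peak queue length:", q.max())          # grows during the burst, drains afterwards
```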

27 pages, 1266 KiB  
Article
Performance Evaluation of the Priority Multi-Server System MMAP/PH/M/N Using Machine Learning Methods
by Vladimir Vishnevsky, Valentina Klimenok, Alexander Sokolov and Andrey Larionov
Mathematics 2021, 9(24), 3236; https://doi.org/10.3390/math9243236 - 14 Dec 2021
Cited by 5 | Viewed by 1722
Abstract
In this paper, we present the results of a study of a priority multi-server queuing system with heterogeneous customers arriving according to a marked Markovian arrival process (MMAP), phase-type service times (PH), and a queue with finite capacity. Priority traffic classes differ in PH distributions of the service time and the probability of joining the queue, which depends on the current length of the queue. If the queue is full, the customer does not enter the system. An analytical model has been developed and studied for a particular case of a queueing system with two priority classes. We present an algorithm for calculating stationary probabilities of the system state, loss probabilities, the average number of customers in the queue, and other performance characteristics for this particular case. For the general case with K priority classes, a new method for assessing the performance characteristics of complex priority systems has been developed, based on a combination of machine learning and simulation methods. We demonstrate the high efficiency of the new method by providing numerical examples. Full article
(This article belongs to the Special Issue Distributed Computer and Communication Networks)
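
The "simulate, then learn" idea for the general K-class case can be sketched as follows: generate performance metrics by simulation over many parameter combinations and fit a regressor that predicts them directly. The toy simulator below is a deliberately crude single-class M/M/c/c+K stand-in, not the MMAP/PH/M/N model of the paper, and the parameter ranges are arbitrary.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)

def simulate_loss_probability(lam, mu, servers, queue_cap, n_events=5_000):
    """Toy discrete-event simulation of an M/M/c/c+K queue; returns the loss fraction."""
    in_system, lost = 0, 0
    for _ in range(n_events):
        rate = lam + min(in_system, servers) * mu
        if rng.random() < lam / rate:                  # next event is an arrival
            if in_system < servers + queue_cap:
                in_system += 1
            else:
                lost += 1
        elif in_system > 0:                            # next event is a departure
            in_system -= 1
    return lost / n_events

# Generate a training set over random parameter combinations.
X, y = [], []
for _ in range(200):
    lam, mu = rng.uniform(1, 10), rng.uniform(1, 5)
    c, K = rng.integers(1, 6), rng.integers(1, 11)
    X.append([lam, mu, c, K])
    y.append(simulate_loss_probability(lam, mu, c, K))

model = GradientBoostingRegressor().fit(np.array(X), np.array(y))
print("predicted loss probability:", model.predict([[5.0, 2.0, 3, 5]])[0])
```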

24 pages, 827 KiB  
Article
Algorithmic Analysis of Finite-Source Multi-Server Heterogeneous Queueing Systems
by Dmitry Efrosinin, Natalia Stepanova and Janos Sztrik
Mathematics 2021, 9(20), 2624; https://doi.org/10.3390/math9202624 - 18 Oct 2021
Cited by 4 | Viewed by 1617
Abstract
The paper deals with a finite-source queueing system serving one class of customers and consisting of heterogeneous servers with unequal service intensities and one common queue. The main model has non-preemptive service, where the customer cannot change the server during its service time. The optimal allocation problem is formulated as a Markov decision problem. We show numerically that the optimal policy which minimizes the long-run average number of customers in the system has a threshold structure. We derive matrix expressions for performance measures of the system and compare the main model with alternative simplified queuing systems which are analysed for an arbitrary number of servers. We observe that the preemptive heterogeneous model operating under a threshold policy is a good approximation for the main model when calculating the mean number of customers in the system. Moreover, using the preemptive and non-preemptive queueing models with the faster-server-first policy, lower and upper bounds are calculated for this mean value. Full article
(This article belongs to the Special Issue Distributed Computer and Communication Networks)

16 pages, 2466 KiB  
Article
Transient Behavior of the MAP/M/1/N Queuing System
by Vladimir Vishnevsky, Konstantin Vytovtov, Elizaveta Barabanova and Olga Semenova
Mathematics 2021, 9(20), 2559; https://doi.org/10.3390/math9202559 - 13 Oct 2021
Cited by 9 | Viewed by 1423
Abstract
This paper investigates the characteristics of the MAP/M/1/N queuing system in the transient mode. The matrix method for solving the Kolmogorov equations is proposed. This method makes it possible, in general, to obtain the main characteristics of the considered queuing system in a non-stationary mode: the probability of losses, the time of the transient mode, the throughput, and the number of customers in the system at time t. The developed method is illustrated by numerical calculations of the characteristics of the MAP/M/1/3 system in the transient mode. Full article
(This article belongs to the Special Issue Distributed Computer and Communication Networks)
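
The transient analysis rests on the Kolmogorov forward equations dπ/dt = πQ, whose solution is π(t) = π(0)·exp(Qt). The sketch below applies this to a plain M/M/1/3 birth-death generator as a stand-in for the MAP/M/1/N generator of the paper; the rates are illustrative, and the matrix-exponential route is only one way to solve the equations.

```python
import numpy as np
from scipy.linalg import expm

lam, mu, N = 1.0, 1.5, 3
Q = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    if n < N:
        Q[n, n + 1] = lam          # arrival
    if n > 0:
        Q[n, n - 1] = mu           # service completion
    Q[n, n] = -Q[n].sum()          # diagonal makes each row sum to zero

pi0 = np.array([1.0, 0.0, 0.0, 0.0])   # start with an empty system
for t in (0.5, 1.0, 2.0, 10.0):
    pi_t = pi0 @ expm(Q * t)
    print(f"t={t:>4}: state probabilities {np.round(pi_t, 4)}, blocking prob {pi_t[-1]:.4f}")
```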

15 pages, 5266 KiB  
Article
Optimized Task Group Aggregation-Based Overflow Handling on Fog Computing Environment Using Neural Computing
by Harwant Singh Arri, Ramandeep Singh, Sudan Jha, Deepak Prashar, Gyanendra Prasad Joshi and Ill Chul Doo
Mathematics 2021, 9(19), 2522; https://doi.org/10.3390/math9192522 - 07 Oct 2021
Cited by 3 | Viewed by 1560
Abstract
It is a non-deterministic challenge on a fog computing network to schedule resources or jobs in a manner that increases device efficacy and throughput, reduces response time, and keeps the system well balanced. Using machine learning as a component of neural computing, we developed an improved Task Group Aggregation (TGA) overflow handling system for fog computing environments. As a result of using TGA in conjunction with an Artificial Neural Network (ANN), we may assess the model’s QoS characteristics to detect an overloaded server and then move the model’s data to virtual machines (VMs). Overloaded and underloaded virtual machines are balanced according to parameters such as CPU, memory, and bandwidth to control fog computing overflow concerns with the help of the ANN and the machine learning concept. Additionally, the Artificial Bee Colony (ABC) algorithm, which is a neural computing system, is employed as an optimization technique to separate the services and users depending on their individual qualities. The response time and success rate were both enhanced using the newly proposed optimized ANN-based TGA algorithm. Compared to the present work’s minimal reaction time, the total improvement in average success rate is about 3.6189 percent, and resource scheduling efficiency has improved by 3.9832 percent. In terms of virtual machine efficiency for resource scheduling, the average success rate, average task completion success rate, and virtual machine response time are improved. The proposed TGA-based overflow handling on a fog computing domain enhances response time compared to current approaches. Fog computing, for example, demonstrates how artificial intelligence-based systems can be made more efficient. Full article
(This article belongs to the Special Issue Distributed Computer and Communication Networks)

11 pages, 302 KiB  
Article
Mathematical Problems of Managing the Risks of Complex Systems under Targeted Attacks with Known Structures
by Alexander Shiroky and Andrey Kalashnikov
Mathematics 2021, 9(19), 2468; https://doi.org/10.3390/math9192468 - 03 Oct 2021
Cited by 2 | Viewed by 1137
Abstract
This paper deals with the problem of managing the risks of complex systems under targeted attacks. It is usually solved by using Defender–Attacker models or similar ones. However, such models do not consider the influence of the defending system structure on the expected attack outcome. Our goal was to study how the structure of an abstract system affects its integral risk. To achieve this, we considered a situation where the Defender knows the structure of the expected attack and can arrange the elements to achieve a minimum of integral risk. In this paper, we consider a particular case of a simple chain attack structure. We generalized the concept of a local risk function to account for structural effects and found an ordering criterion that ensures the optimal placement of the defending system’s elements inside a given simple chain structure. The obtained result is the first step to formulate the principles of optimally placing system elements within an arbitrarily complex network. Knowledge of these principles, in turn, will allow solving the problems of optimal allocation of resources to minimize the risks of a complex system, considering its structure. Full article
(This article belongs to the Special Issue Distributed Computer and Communication Networks)
17 pages, 1778 KiB  
Article
Analysis of MAP/PH/1 Queueing System with Degrading Service Rate and Phase Type Vacation
by Alka Choudhary, Srinivas R. Chakravarthy and Dinesh C. Sharma
Mathematics 2021, 9(19), 2387; https://doi.org/10.3390/math9192387 - 25 Sep 2021
Cited by 7 | Viewed by 1765
Abstract
Degradation of services arises in practice due to a variety of reasons including wear-and-tear of machinery and fatigue. In this paper, we look at MAP/PH/1-type queueing models in which degradation is introduced. There are several ways to incorporate degradation into a service system. Here, we model the degradation in the form of the service rate declining (i.e., the service rate decreases with the number of services offered) until the degradation is addressed. The service rate is reset to the original rate either after a fixed number of services is offered or when the server becomes idle. We look at two models. In the first, we assume that the degradation is instantaneously fixed, and in the second model, there is a random time that is needed to address the degradation issue. These models are analyzed in steady state using the classical matrix-analytic methods. Illustrative numerical examples are provided. Comparisons of both the models are drawn. Full article
(This article belongs to the Special Issue Distributed Computer and Communication Networks)

16 pages, 2553 KiB  
Article
Non-Statistical Method for Validation the Time Characteristics of Digital Control Systems with a Cyclic Processing Algorithm
by Vitaly Promyslov and Kirill Semenkov
Mathematics 2021, 9(15), 1732; https://doi.org/10.3390/math9151732 - 22 Jul 2021
Cited by 5 | Viewed by 1399
Abstract
The paper discusses the problem of performance and timing parameters with respect to the validation of digital instrumentation and control systems (I&C). Statistical methods often implicitly assume that the probability distribution law of the estimated parameters is close to normal. Thus, the confidence intervals for the parameter are determined on the grounds of this assumption. However, we encountered cases when the delay distribution law in I&C is not normal. In these cases, we used the non-statistical network calculus method for time parameter estimation. The network calculus method is well elaborated for lossless digital system models with a seamless processing algorithm depending only on data volume. We consider the extension of the method to the case of I&C systems with considerable changes in the data flow and content-dependent processing disciplines. The model is restricted to systems with cyclic processing algorithms and fast network connections. Network calculus describes the data flow and system parameters in terms of flow envelopes and service curves that are generally unknown in advance. In this paper, we define equations that allow the calculation of these characteristics from experimental data. The correspondence between the network calculus and classical statistical estimation methods is discussed. Additionally, we give an example of applying the model to a real I&C system. Full article
(This article belongs to the Special Issue Distributed Computer and Communication Networks)
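
For orientation, the sketch below reproduces the textbook network-calculus delay bound (the horizontal deviation between an arrival curve and a service curve) for a token-bucket flow crossing a rate-latency node. It does not implement the paper's extension to content-dependent processing, and all numeric parameters are invented.

```python
# For a token-bucket arrival curve alpha(t) = b + r*t and a rate-latency service
# curve beta(t) = R*max(t - T, 0) with r <= R, the closed-form delay bound is
# T + b/R; the brute-force search below simply confirms it numerically.
import numpy as np

b, r = 5.0, 2.0        # burst (kbit) and sustained rate (kbit/ms) of the flow
R, T = 4.0, 1.5        # service rate (kbit/ms) and latency (ms) of the node

alpha = lambda t: b + r * t

def horizontal_deviation(t_max=100.0, step=1e-3):
    """max over t of the smallest d with beta(t + d) >= alpha(t)."""
    worst = 0.0
    for t in np.arange(0.0, t_max, step):
        # invert the rate-latency curve: beta(t+d) >= alpha(t)  <=>  d >= T + alpha(t)/R - t
        d = max(0.0, T + alpha(t) / R - t)
        worst = max(worst, d)
    return worst

print("numerical delay bound:", round(horizontal_deviation(), 3), "ms")
print("closed-form bound T + b/R:", T + b / R, "ms")
```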

34 pages, 714 KiB  
Article
Single-Threshold Model Resource Network and Its Double-Threshold Modifications
by Liudmila Zhilyakova
Mathematics 2021, 9(12), 1444; https://doi.org/10.3390/math9121444 - 21 Jun 2021
Cited by 2 | Viewed by 1709
Abstract
A resource network is a non-classical flow model where an infinitely divisible resource is iteratively distributed among the vertices of a weighted digraph. The model operates in discrete time. The weights of the edges denote their throughputs. The basic model, a standard resource network, has one general characteristic of the resource amount: the network threshold value. This value depends on the graph topology and the weights of the edges. This paper briefly outlines the main characteristics of standard resource networks and describes two of its modifications. In both non-standard models, the changes concern the rules by which the vertices receive the resource. The first modification imposes restrictions on the capacity of selected vertices, preventing them from accumulating resource surpluses. In the second modification, a network with so-called greedy vertices, the vertices, on the contrary, first accumulate the resource themselves and only then begin to give it away. It is noteworthy that completely different changes lead, in general, to the same consequence: the appearance of a second threshold value. On some intervals of resource values, the functioning of the networks is described by a homogeneous Markov chain; on others, by more complex rules. Transient processes and limit states in networks with different topologies and different operation rules are investigated and described. Full article
(This article belongs to the Special Issue Distributed Computer and Communication Networks)
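
A short simulation sketch of the standard resource-network rule, as it is usually stated (a vertex holding at least its total out-throughput sends the full throughput along every out-edge; otherwise it sends everything it has, split proportionally to the edge throughputs), is given below. The 3-vertex digraph, its weights, and the initial resource are arbitrary, and the code illustrates only the basic model, not the two modifications studied in the paper.

```python
import numpy as np

# R[i, j] = throughput of edge i -> j (0 means no edge).
R = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 2.0],
              [2.0, 1.0, 0.0]])
out_capacity = R.sum(axis=1)

def step(q):
    """One synchronous redistribution step; total resource is conserved."""
    q_next = q.copy()
    for i in range(len(q)):
        send_total = min(q[i], out_capacity[i])
        share = R[i] / out_capacity[i]          # proportional split over out-edges
        q_next[i] -= send_total
        q_next += send_total * share            # deliver to the neighbours
    return q_next

q = np.array([10.0, 0.0, 0.0])                  # total resource W = 10, all at vertex 0
for _ in range(200):
    q = step(q)
print("resource distribution after 200 steps:", np.round(q, 4))
```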

14 pages, 1070 KiB  
Article
Estimation of the Optimal Threshold Policy in a Queue with Heterogeneous Servers Using a Heuristic Solution and Artificial Neural Networks
by Dmitry Efrosinin and Natalia Stepanova
Mathematics 2021, 9(11), 1267; https://doi.org/10.3390/math9111267 - 31 May 2021
Cited by 10 | Viewed by 2145
Abstract
This paper deals with heterogeneous queues where servers differ not only in service rates but also in operating costs. The classical optimisation problem in queueing systems with heterogeneous servers consists in the optimal allocation of customers between the servers with the aim of minimising the long-run average costs of the system per unit of time. As is known, under some assumptions the optimal allocation policy for this system is of threshold type, i.e., the policy depends on the queue length and the state of the faster servers. The optimal thresholds can be calculated using a Markov decision process by implementing the policy-iteration algorithm. This algorithm may have certain limitations on obtaining a result for the entire range of system parameter values. However, the available data sets of evaluated optimal threshold levels and values of system parameters can be used to provide estimations of the optimal thresholds through artificial neural networks. The obtained results are accompanied by a simple heuristic solution. Numerical examples illustrate the quality of the estimations. Full article
(This article belongs to the Special Issue Distributed Computer and Communication Networks)
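
The supervised-learning step described above amounts to fitting a regressor on previously computed (system parameters, optimal threshold) pairs. A minimal sketch with scikit-learn follows; the CSV file, its column names, and the network size are hypothetical placeholders.

```python
import pandas as pd
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Hypothetical table of policy-iteration results: system parameters and the
# corresponding optimal threshold q_star.
data = pd.read_csv("optimal_thresholds.csv")
X = data[["lambda", "mu1", "mu2", "c1", "c2"]].values
y = data["q_star"].values

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(X_train, y_train)
print("R^2 on held-out parameter combinations:", round(net.score(X_test, y_test), 3))
```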

26 pages, 983 KiB  
Article
Queuing-Inventory Models with MAP Demands and Random Replenishment Opportunities
by Srinivas R. Chakravarthy and B. Madhu Rao
Mathematics 2021, 9(10), 1092; https://doi.org/10.3390/math9101092 - 12 May 2021
Cited by 8 | Viewed by 1673
Abstract
Combining the study of queuing with inventory is very common, and such systems are referred to as queuing-inventory systems in the literature. These systems occur naturally in practice and have been studied extensively. The inventory systems considered in the literature are generally of (s,S)-type. However, in this paper we look at opportunistic-type inventory replenishment in which an independent point process is used to model events that are called opportunistic for replenishing inventory. When an opportunity (to replenish) occurs, a probabilistic rule that depends on the inventory level is used to determine whether or not to avail of it. Assuming that the customers arrive according to a Markovian arrival process, the demands for inventory occur in batches of varying size, the demands require random service times that are modeled using a continuous-time phase-type distribution, and the point process for the opportunistic replenishment is a Poisson process, we apply matrix-analytic methods to study two such models. In one of the models, the customers are lost when there is no inventory at arrival, and in the other model, the customers can enter the system even if the inventory is zero, provided the server is busy at that moment. However, the customers are lost at arrivals when the server is idle with zero inventory, or at service completion epochs that leave the inventory at zero. Illustrative numerical examples are presented, and some possible future work is highlighted. Full article
(This article belongs to the Special Issue Distributed Computer and Communication Networks)

18 pages, 2431 KiB  
Article
Green Energy Efficient Routing with Deep Learning Based Anomaly Detection for Internet of Things (IoT) Communications
by E. Laxmi Lydia, A. Arokiaraj Jovith, A. Francis Saviour Devaraj, Changho Seo and Gyanendra Prasad Joshi
Mathematics 2021, 9(5), 500; https://doi.org/10.3390/math9050500 - 01 Mar 2021
Cited by 18 | Viewed by 2442
Abstract
Presently, a green Internet of Things (IoT)-based energy-aware network plays a significant part in sensing technology. The development of IoT has a major impact on several application areas such as healthcare, smart city, transportation, etc. The exponential rise in the sensor nodes might result in enhanced energy dissipation. Thus, the minimization of environmental impact in green media networks is a challenging issue for both researchers and business people. Energy efficiency and security remain crucial in the design of IoT applications. This paper presents a new green energy-efficient routing with DL-based anomaly detection (GEER-DLAD) technique for IoT applications. The presented model enables IoT devices to utilize energy effectively in such a way as to increase the network lifespan. The GEER-DLAD technique performs an error lossy compression (ELC) technique to lessen the quantity of data communicated over the network. In addition, the moth flame swarm optimization (MSO) algorithm is applied for the optimal selection of routes in the network. Besides, the DLAD process takes place via the recurrent neural network-long short term memory (RNN-LSTM) model to detect anomalies in the IoT communication networks. A detailed experimental validation process is carried out, and the results ensured the betterment of the GEER-DLAD model in terms of energy efficiency and detection performance. Full article
(This article belongs to the Special Issue Distributed Computer and Communication Networks)
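
A generic sketch of LSTM-based anomaly detection on traffic sequences is shown below: train the network to predict the next sample of normal traffic and flag windows with unusually large prediction error. This is not the exact GEER-DLAD architecture; the synthetic data, layer sizes, and 3-sigma threshold are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
window = 20

# Synthetic "normal" traffic: a noisy sine wave, cut into (window -> next value) pairs.
series = np.sin(np.linspace(0, 60, 3000)) + 0.05 * rng.normal(size=3000)
X = np.stack([series[i:i + window] for i in range(len(series) - window)])[..., None]
y = series[window:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

errors = np.abs(model.predict(X, verbose=0).ravel() - y)
threshold = errors.mean() + 3 * errors.std()      # simple 3-sigma rule
print("windows flagged as anomalous:", int((errors > threshold).sum()))
```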

30 pages, 426 KiB  
Article
Polling Systems and Their Application to Telecommunication Networks
by Vladimir Vishnevsky and Olga Semenova
Mathematics 2021, 9(2), 117; https://doi.org/10.3390/math9020117 - 07 Jan 2021
Cited by 11 | Viewed by 2254
Abstract
The paper presents a review of papers on stochastic polling systems published in 2007–2020. Due to the applicability of stochastic polling models, the researchers face new and more complicated polling models. Stochastic polling models are effectively used for performance evaluation, design and optimization of telecommunication systems and networks, transport systems and road management systems, traffic, production systems and inventory management systems. In the review, we separately discuss the results for two-queue systems as a special case of polling systems. Then we discuss new and already known methods for polling system analysis including the mean value analysis and its application to systems with heavy load to approximate the performance characteristics. We also present the results concerning the specifics in polling models: a polling order, service disciplines, methods to queue or to group arriving customers, and a feedback in polling systems. The new direction in the polling system models is an investigation of how the customer service order within a queue affects the performance characteristics. The results on polling systems with correlated arrivals (MAP, BMAP, and the group Poisson arrivals simultaneously to all queues) are also considered. We briefly discuss the results on multi-server, non-discrete polling systems and application of polling models in various fields. Full article
(This article belongs to the Special Issue Distributed Computer and Communication Networks)

Review


48 pages, 607 KiB  
Review
Graph Metrics for Network Robustness—A Survey
by Milena Oehlers and Benjamin Fabian
Mathematics 2021, 9(8), 895; https://doi.org/10.3390/math9080895 - 17 Apr 2021
Cited by 28 | Viewed by 6289
Abstract
Research on the robustness of networks, and in particular the Internet, has gained critical importance in recent decades because more and more individuals, societies and firms rely on this global network infrastructure for communication, knowledge transfer, business processes and e-commerce. In particular, modeling the structure of the Internet has inspired several novel graph metrics for assessing important topological robustness features of large complex networks. This survey provides a comparative overview of these metrics, presents their strengths and limitations for analyzing the robustness of the Internet topology, and outlines a conceptual tool set in order to facilitate their future adoption by Internet research and practice but also other areas of network science. Full article
(This article belongs to the Special Issue Distributed Computer and Communication Networks)
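
Several of the surveyed metrics can be computed directly with networkx; the snippet below evaluates a few of them on an arbitrary synthetic topology (a Barabási-Albert graph as a rough stand-in for an Internet-like network).

```python
import networkx as nx

# Synthetic scale-free topology; the size and attachment parameter are arbitrary.
G = nx.barabasi_albert_graph(n=500, m=2, seed=42)

print("node connectivity:      ", nx.node_connectivity(G))
print("algebraic connectivity: ", round(nx.algebraic_connectivity(G), 4))
print("average shortest path:  ", round(nx.average_shortest_path_length(G), 3))
print("degree assortativity:   ", round(nx.degree_assortativity_coefficient(G), 3))
```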