Article

Co-Simulation Platform with Hardware-in-the-Loop Using RTDS and EXata for Smart Grid

Peng Gong, Haowei Yang, Haiqiao Wu, Huibo Li, Yu Liu, Zhenheng Qi, Weidong Wang, Dapeng Wu and Xiang Gao
1 School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
2 Department of Computer Science, City University of Hong Kong, Hong Kong 999077, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(17), 3710; https://doi.org/10.3390/electronics12173710
Submission received: 2 August 2023 / Revised: 25 August 2023 / Accepted: 25 August 2023 / Published: 2 September 2023
(This article belongs to the Special Issue Recent Advances in Intelligent Vehicular Networks and Communications)

Abstract

The modern smart grid is a vital component of national development and a complex coupled network composed of power and communication networks. A fault or attack in either network may degrade the performance of the power grid or cause a large-scale outage, leading to significant economic losses. To assess the impact of grid faults or attacks, hardware-in-the-loop (HIL) simulation tools that integrate real grid networks and software virtual networks (SVNs) are used. However, scheduling faults and modifying model parameters are challenging with most existing simulators, and traditional HIL interfaces support only a single device. To address these limitations, we designed and implemented a grid co-simulation platform that can dynamically simulate grid faults and evaluate grid sub-nets. The platform uses RTDS and EXata as the power and communication simulators, respectively, integrated through a protocol conversion module that synchronizes and converts protocol formats. Additionally, the platform provides a programmable fault configuration interface (PFCI) to modify model parameters and a real sub-net access interface (RSAI) to connect physical grid devices or sub-nets to the SVN, improving simulation accuracy. We also conducted several tests to demonstrate the effectiveness of the proposed platform.

1. Introduction

With the advancements in information and communication technology (ICT), modern power systems have evolved into complex coupled network systems comprising power and communication information systems [1]. In a smart grid system, unlike in traditional power networks, a fault or attack on either the power system or the components in the communication system may result in the paralysis of the entire coupled network [2]. To improve the control performance and stability of the coupled system and eliminate potential chain faults, a comprehensive and accurate understanding of the dynamic behavioral characteristics of the coupled system and the mechanisms associated with the occurrence and development of faults in the system is necessary. Thus, it is essential to establish a platform that can deeply analyze complex information/physical composite systems and provide simulation, testing, and verification support for studying theoretical and application problems related to these systems.
Such platforms have therefore garnered significant academic attention. In general, evaluation methods for complex coupled systems can be classified into three types: test-bed (hardware), digital co-simulation (software), and hardware-in-the-loop (HIL) co-simulation [3]. A test-bed relies on a physical environment and offers high accuracy but introduces high overhead [4]. Moreover, test-beds are usually limited in scale and cannot be deployed in a separate environment, limiting the reproducibility of the network. Digital co-simulation, on the other hand, utilizes different simulation engines to simulate grid and communication network behaviors; it is a cost-effective alternative to a test-bed, with a short test cycle and flexibility in building grid node topologies. However, most current co-simulation architectures simplify time synchronization mechanisms, system structures, component compositions, and natural response characteristics, which reduces the accuracy of simulation results. Furthermore, power systems adopt many dedicated protocols that cannot be directly ported to a simulator and are difficult to evaluate accurately through software-only co-simulation. The HIL co-simulation method combines the advantages of both approaches, integrating physical hardware (e.g., novel grid devices) and SVNs to establish complex, real-time embedded systems [5].
However, most HIL simulation methods are oriented toward static network simulations, in which the parameters of the software virtual network (SVN) remain unchanged during the simulation, and therefore cannot meet the following evaluation requirements for a grid. First, to verify the reliability of new grid structures or power devices, they must be tested in dynamic network environments with power outages, link faults, network attacks, etc., which cannot be implemented in a static simulation. Second, when parameters need to be adjusted, the simulation must be reconfigured and rerun multiple times to optimize the network settings, which is time-consuming. In addition, when building a co-simulation platform, one of the critical problems is how the power system and the communication network interact with each other [6,7]. For example, most communication network simulators support only standard IP packets and cannot directly interact with a power system's nonstandard phasor measurement unit (PMU) or stability control service packets.
To solve the aforementioned problems, we propose an HIL co-simulation platform that can dynamically load grid faults during the simulation process and integrate physical sub-nets into SVNs via the proposed programmable fault configuration interface (PFCI) and real sub-net access interface (RSAI). First, well-designed simulators for the power and communication networks are required to implement this platform. Compared with developing a new simulation engine from scratch, using existing high-performance simulation software is more accessible, more efficient, and less costly [8]. Based on their real-time simulation characteristics, we chose EXata and a real-time digital simulation system (RTDS) to form the co-simulation platform and implemented the data exchange between the simulators using a protocol conversion module (PCM). Next, the PFCI implements programmable fault configuration and model parameter modification using fault configuration and GET/SET packets. Moreover, the RSAI achieves one-to-one mapping between real sub-net routers and virtual network nodes to improve simulation realism and increase the simulation scale.
In conclusion, the main contributions of this paper are as follows.
(1)
An HIL co-simulation platform with EXata and RTDS is proposed to perform a large-scale, highly realistic simulation of grid scenarios.
(2)
A PFCI is designed and implemented to modify network parameters or load real-time fault events during simulations. Meanwhile, an online fault configuration module (OFCM) is developed to provide efficient and convenient management operations.
(3)
An RSAI is developed to realize the platform’s physical device and sub-net access function. In addition, the RSAI can scale up the grid to be tested.
(4)
Several tests are conducted on the platform, and the results demonstrate that the proposed co-simulation platform is effective.

2. Related Works

As two separate systems, power and ICT systems have different specialized simulation tools. The dynamic behavior of a power system is continuous in time and can be represented by a set of differential algebraic equations [9]. Usually, these equations can only be solved numerically, so power system simulation tools use discrete time steps to approximate a system's current state. In contrast, an information and communication system is discrete in nature and can be modeled directly with a discrete event-based simulation (DES) tool [10]. Such a tool utilizes a discrete state model to describe a network in terms of discrete parameters (e.g., data queue length) and discrete events (e.g., packet transmission), thus translating complex communication processes into concrete event queues.
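To make the DES abstraction concrete, the following minimal sketch, which is ours and not taken from any cited simulator, shows a time-ordered event queue driving a toy packet-transmission model; all names and the 2 ms link delay are illustrative.

```python
import heapq

class DiscreteEventSimulator:
    """Minimal DES core: events are (time, seq, handler, payload) tuples
    kept in a priority queue ordered by simulation time."""

    def __init__(self):
        self.clock = 0.0   # current simulation time in seconds
        self._queue = []   # heap of pending events
        self._seq = 0      # tie-breaker for simultaneous events

    def schedule(self, delay, handler, payload=None):
        heapq.heappush(self._queue,
                       (self.clock + delay, self._seq, handler, payload))
        self._seq += 1

    def run(self, until):
        # Pop events in time order; handlers may schedule further events.
        while self._queue and self._queue[0][0] <= until:
            self.clock, _, handler, payload = heapq.heappop(self._queue)
            handler(self, payload)

# Toy usage: a packet transmission event triggers a reception event.
def on_receive(sim, pkt):
    print(f"t={sim.clock:.3f} s  received {pkt}")

def on_send(sim, pkt):
    print(f"t={sim.clock:.3f} s  sent {pkt}")
    sim.schedule(0.002, on_receive, pkt)  # assumed 2 ms link delay

sim = DiscreteEventSimulator()
sim.schedule(0.0, on_send, "PMU frame #1")
sim.run(until=1.0)
```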
Based on the above features, various power and communication network simulators are available. Currently, the commonly used power system simulators include (1) BPA, PSASP, PSS/E, and SYMPOW, which are mainly employed for steady-state and electromechanical transient simulations; (2) EMTP/ATP and PSCAD/EMTDC, which focus on electromagnetic transient analysis; and (3) DIgSILENT, OPAL-RT, and RTDS, which are integrated power system simulators. RTDS [11], which can accurately simulate AC and DC power systems, is a real-time digital simulator for electromagnetic transient power systems. Meanwhile, the popular communication network simulators include (1) open-source simulators, such as NS-2, NS-3, and OMNet++, and (2) commercial simulators, such as OPNET and EXata. EXata [12], as the upgraded version of QualNet, is designed for novel wireless communication technologies and supports real-time simulation. Therefore, based on the real-time simulation performances of RTDS and EXata, they are suitable for building an HIL co-simulation platform to evaluate smart grids.
In recent years, to combine both types of simulation approaches to investigate the characteristics of smart grids, researchers have proposed the following three types of solutions [13].

2.1. Test-Bed

A test-bed, which has the highest authenticity of the three methods, provides a hardware test environment for novel grid technologies before deployment in the field. The authors of [14] provide a comprehensive survey of test-beds built around the world, including SmartGridLab [15], the JEJU testbed [16], VAST [17], etc. In [18], a test-bed was proposed to monitor an IEEE 14-bus system simulated using an RTDS; it consisted of a GPS clock, PMUs, an RTDS, and a phasor data concentrator (PDC). Similarly, another test-bed developed for an electric power distribution system was presented in [19], adapted for research and education in labs. Most of the test-beds mentioned above, however, are set up either in a lab environment or in isolation, which limits the scale and reproducibility of grid tests [14].

2.2. Digital Co-Simulation

Digital co-simulation performed in a purely digital environment is an effective approach to reducing testing costs and increasing scenarios’ scalability. This technique is typically categorized into non-real-time simulation and real-time simulation. In a non-real-time simulation, the event execution time generally exceeds the set time step. In contrast, in real-time simulation, the event execution time is less than or equal to the time step [20].

2.2.1. Non-Real-Time Simulation

Initially, Mesut Baran et al. [21] proposed a co-simulation scheme using PSCAD/EMTDC and a communication module written in Java, introducing the idea of co-simulating time-continuous and event-triggered systems. EPOCHS [22] is considered the first co-simulation platform built from multidisciplinary simulation tools. It used a high-level architecture (HLA) module to support the joint operation of multiple simulators and adopted three independent simulation tools: PSCAD/EMTDC for electromagnetic transient simulation of power systems, PSLF for electromechanical transient simulation, and NS2 for communication network modeling and simulation. Meanwhile, the runtime infrastructure (RTI) was designed as the interface between the independent simulators to coordinate each simulator's simulation time and data transmission. Recently, smart grids have integrated many distributed energy resources (DERs) due to their low environmental impact and improved energy efficiency [23], and the simulation scale is increasing accordingly. A co-simulation framework was presented in [24] to enable large-scale transmission and distribution simulation. Its major modules were PSS/E as a transmission solver, GridLAB-D as a distribution solver, and a hierarchical engine for large-scale infrastructure co-simulation (HELICS) as the interface to coordinate time and exchange variables. Another co-simulation platform for evaluating cybersecurity and control applications was introduced in [25]. This platform integrated OpenDSS and NS3 with Mosaik [26], which reuses and combines existing simulation models and simulators to create large-scale smart grid scenarios. However, when such platforms are used for power system dynamic problems (e.g., stability control or wide-area monitoring), it is challenging to synchronize simulation time precisely and respond to power or communication events in a timely manner, which affects the accuracy of simulation results [27]. Moreover, when a simulation requires more computing capacity than the server provides or the model's time determinism is unsuitable, the simulation runtime may significantly exceed real time.

2.2.2. Real-Time Simulation

In a real-time simulation platform, simulator engines synchronize the simulation clock and real-time clock to monitor and evaluate novel control and protection devices, demanding more computing capability than non-real-time simulation. Mikel Armendariz et al. [28] proposed a real-time co-simulation platform, which consisted of four parts: a power system real-time simulation unit, a communication network real-time simulation unit, a system-monitoring center, and network connection equipment. This platform could achieve high-precision electromagnetic transient simulation up to 900 power nodes and wide-area power-system-monitoring simulation up to 240,000 power nodes. The authors in [29] employed OPAL-RT and MATLAB to construct a real-time co-simulation platform to evaluate DER coordination schemes. In addition, ref. [30] combined RTDS and OPAL-RT, which run in the electromechanical/root mean square (RMS) and electromagnetic transient (EMT) domains, respectively, to demonstrate the feasibility of a real-time co-simulation of RMS and EMT power system models. They also developed an optical fiber interface with an Aurora protocol to exchange data and compensate for latency. A novel distributed simulator utilizing GridLAB-D and CORE was designed in [8], which scaled up the simulation using lightweight virtualization technology supported using a Linux kernel and evaluated the performance of scheduling algorithms in smart grids.

2.3. HIL Co-Simulation

However, it is difficult for digital co-simulation studies and implementations to accurately simulate the performance of secondary power and communication devices and the unique, dedicated communication protocols used in power systems. One approach to achieving a more realistic simulation system is to connect real devices to the computer simulation loop, forming an HIL co-simulation system. Tong et al. [31] utilized RTDS and QualNet to simulate and verify the impact of the communication bit error rate on the power system, achieving a simulation method with synchronous digital hierarchy (SDH) physical device access. Another co-simulation system was proposed in ref. [32] to analyze the impact of network attacks on power grids; it integrated RTDS, DETERLab (a network security simulator), the NS3 network simulator, and PMU devices. An architecture for co-simulation proposed in ref. [33] involved two software packages, i.e., OMNeT++ for the ICT system and DIgSILENT for the power system, and a MATLAB GUI was designed to input smart grid data. With the proposed platform, that paper also evaluated the feasibility of long-term evolution (LTE) as a communication medium for fault management and network reconfiguration. However, dynamically and accurately configuring power grid faults and network attacks, as well as supporting real grid sub-net access, remains challenging for these simulation frameworks.
Thus, we propose a co-simulation platform to realize the large-scale grid dynamic evaluation through the designed PFCI and RSAI to address the aforementioned issues. With the PFCI, operators can accurately schedule multiple grid faults or programmatically obtain and modify model parameters during the simulation process, significantly improving simulation accuracy and reducing the rerun times of simulation scripts. The RSAI allows for the connection of physical grid equipment or a subnet to the simulation platform, enabling HIL simulation and providing a solution for scaling up the simulation.

3. Architecture of the Platform

The architecture of the co-simulation platform is shown in Figure 1. It consists of a communication simulation module (CSM), a power simulation module (PSM), an online fault configuration module (OFCM), two types of external interfaces, and a protocol conversion module (PCM). The platform connects real grid devices (such as SDH devices) and sub-nets with the CSM's SVNs and performs tasks such as dynamically loading grid faults and modifying network parameters according to the test purpose. Each module is described as follows.
CSM: EXata is the core component of the CSM and is used to simulate the communication network behavior of a power grid. EXata is network simulation software that supports human-in-the-loop operation, providing a powerful guarantee for the simulation and evaluation of a communication network, and it can be used for design, testing, and training in multiple areas. In addition, the simulator provides many high-precision, standards-based protocol models, including sophisticated models of the wireless environment, mobility, weather, etc. It also supports the development of custom protocols and interfaces on demand, making it flexible for building various communication network simulation scenarios. Furthermore, EXata can run on a cluster, multicore, or multiprocessor system and can simulate thousands of nodes with high fidelity, facilitating the simulation of large-scale power grid scenarios.
PSM: RTDS, an electromagnetic transient simulator, is adopted to simulate models of power stability control protection and generate power business traffic. The simulation step of RTDS ranges from 50 to 100 μs, and the frequency response resolution is 3000 Hz.
OFCM: This module tests the impact of communication network faults on the power network and obtains and sets model parameters. It can dynamically load a variety of communication network faults through the developed GUI and PFCI, including node faults, link faults, DoS attacks, and data tampering.
External interface: Two types of external interfaces, i.e., the PFCI and RSAI, are applied in this architecture. The PFCI realizes the data interaction between the OFCM and CSM: via a UDP socket, it parses the configuration packets sent by the OFCM into messages recognizable by EXata and pre-caches them in the EXata event queues according to their execution times. Unlike the PFCI, the RSAI enables one-to-one mapping of physical routers or devices to virtual nodes in EXata to realize data interaction. The working principles of both interfaces are described in Section 4.
PCM: This module performs data packet format conversion and synchronization to enable communication between the PSM and CSM. Unlike the Ethernet data of communication networks, the data frames transmitted by the stability device must comply with strict time slot synchronization requirements. Moreover, the stability control device outputs channel data using a proprietary communication protocol with line-spread spectrum coding, defined according to the applicable communication regulations, so compatibility with commercial network devices is challenging. To address this issue, we designed a protocol converter based on the existing communication equipment of the stability control system. This converter includes a spread spectrum coding and decoding device (SCS-500TX communication interface device) and an IP packet encapsulation and transmission device (SSP-592 fiber/Ethernet conversion device), as shown in Figure 2.
The protocol converter is the intermediate module that connects the physical devices to the CSM and is mainly used for the interconversion of E1 and Ethernet. The workflow is as follows. Whenever data are transmitted from the PSM to the CSM, the communication interface device compares its device clock with the input stream to synchronize the time slot. The hardware decoder then decodes the spread spectrum code to recover high-level data link control (HDLC) data frames, from which the valid data are read out and error-checked by a high-speed CPU chip. Next, the SSP-592 fiber/Ethernet conversion device re-encapsulates these valid data into IP packets, which are output in socket UDP mode. The module performs the reverse process when the communication network has data to send to the grid.
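The converter firmware itself is not published in the paper; purely as an illustration of the final re-encapsulation step, the Python sketch below wraps the information field of a decoded HDLC frame into a UDP datagram for the CSM. The endpoint address, frame layout, and use of CRC-CCITT (via binascii.crc_hqx) as the frame check are all our assumptions.

```python
import binascii
import socket

CSM_ADDR = ("192.0.2.10", 5000)  # hypothetical CSM UDP endpoint

def forward_hdlc_payload(hdlc_frame: bytes, sock: socket.socket) -> None:
    """Re-encapsulate one decoded HDLC frame into a UDP datagram.

    Assumed frame layout (illustrative only): address (1 B),
    control (1 B), information field, FCS (2 B). The real device's
    FCS algorithm may differ; CRC-CCITT is shown as a placeholder.
    """
    if len(hdlc_frame) < 5:
        return  # too short to contain any information field
    info, fcs = hdlc_frame[2:-2], hdlc_frame[-2:]
    if binascii.crc_hqx(hdlc_frame[:-2], 0xFFFF) != int.from_bytes(fcs, "big"):
        return  # drop frames that fail the error check
    sock.sendto(info, CSM_ADDR)  # valid data out in socket UDP mode

# Usage: sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#        forward_hdlc_payload(decoded_frame, sock)
```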

4. External Interfaces of the Platform

4.1. Programmable Fault Configuration Interface

As mentioned above, the PFCI is designed to exchange data between the OFCM and CSM. On the one hand, it parses the fault configuration packets from the OFCM and pre-caches the related events. On the other hand, it obtains EXata model parameters through the GET packet and sends them to the OFCM for display or receives parameters from the OFCM through the SET packet to modify model parameters.

4.1.1. Fault Configuration

We defined four types of fault packet formats, as shown in Figure 3, corresponding to node faults, link faults, DoS attacks, and data-tampering attacks, respectively.
Standard fields in these formats include addr, type, len, nodeId, startTime, and endTime, where addr stores the IP address of the CSM; type denotes the packet type; len denotes the packet length; nodeId (1 or 2) indicates the node on which a fault or attack occurs; and startTime and endTime indicate the fault execution and termination times, respectively. Furthermore, delayTime in the node and link fault packets determines the action delay of the fault, which mimics the switching delay in real situations. In the DoS attack and data-tampering formats, itemNum and itemSize denote the total number and the size of the packets the attack sends, respectively.
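The text specifies the fields but not their byte-level widths or ordering; the following sketch therefore assumes an illustrative big-endian layout (4-byte IPv4 addr, 1-byte type, and so on) purely to show how such a fault packet could be assembled. The type code is hypothetical.

```python
import socket
import struct

# Assumed wire layout for a node-fault packet (widths are illustrative):
# addr (4 B IPv4), type (1 B), len (2 B), nodeId (2 B),
# startTime, endTime, delayTime (4 B each, here in milliseconds).
NODE_FAULT_FMT = "!4sBHHIII"
TYPE_NODE_FAULT = 1  # hypothetical type code

def build_node_fault(csm_ip: str, node_id: int,
                     start_ms: int, end_ms: int, delay_ms: int) -> bytes:
    length = struct.calcsize(NODE_FAULT_FMT)      # len: packet length
    return struct.pack(NODE_FAULT_FMT,
                       socket.inet_aton(csm_ip),  # addr: IP address of the CSM
                       TYPE_NODE_FAULT,           # type: packet type
                       length,
                       node_id,                   # nodeId: node with the fault
                       start_ms,                  # startTime: fault execution
                       end_ms,                    # endTime: fault termination
                       delay_ms)                  # delayTime: switching delay

# e.g., a node fault at node 4 from 60 s to 90 s with a 20 ms action delay:
pkt = build_node_fault("192.0.2.10", 4, 60_000, 90_000, 20)
```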
In the initialization phase, a command is added to the EXata configuration file to activate the PFCI. Then, the EXata kernel initializes a UDP socket for receiving data from the OFCM and assigns an idle layer of EXata to the PFCI to handle events.
After initialization, the PFCI workflow is as shown in Figure 4. First, via the OFCM, the operator can pre-program a series of the aforementioned faults by entering them manually or reading them from a file. Once a fault packet is received, the PFCI stores it in the receive buffer. Next, the event processor of EXata calls the packet parsing function to interpret the packet. If a fault parameter in the packet is incorrect, the parser returns the corresponding error packet to the send buffer, and the CSM sends this error packet to the OFCM to notify the operator of the command error. Otherwise, the PFCI converts the fault packet into a message and pre-caches it in the event queue based on the time label in the packet, namely startTime. Then, when the simulation time equals the value of startTime, the corresponding event is assigned to the processor by the event scheduler. Finally, the processor triggers one of the following events according to the event type (a minimal sketch of this parse-and-schedule flow appears after the list).
  • Node fault: Shut down all ports of the node represented by nodeId;
  • Link fault: Shut down the ports between nodeId1 and nodeId2;
  • DoS attack: Send large virtual packets to the target node to delay the target system service response or even reject the action;
  • Data-tampering attack: Tamper with data or re-transmit captured standard packets to hinder the reliability and accuracy of data exchange in the grid.
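The following Python sketch is our stand-in for the behavior just described, not EXata's internal API: messages are validated on receipt, pre-cached in a time-ordered queue, and fired when the simulated clock reaches their startTime, so dispatch itself adds no delay.

```python
import heapq

class PFCIScheduler:
    """Stand-in for the PFCI pre-cache and event scheduler."""

    HANDLERS = {}  # fault type -> handler, filled via register()

    def __init__(self):
        self._events = []  # heap of (startTime, seq, msg)
        self._seq = 0

    def precache(self, msg: dict):
        # A bad parameter would instead produce an error packet to the OFCM.
        if msg["type"] not in self.HANDLERS:
            raise ValueError(f"unknown fault type: {msg['type']}")
        heapq.heappush(self._events, (msg["startTime"], self._seq, msg))
        self._seq += 1

    def advance(self, sim_time: float):
        # Fire every pre-cached event whose startTime has been reached.
        while self._events and self._events[0][0] <= sim_time:
            _, _, msg = heapq.heappop(self._events)
            self.HANDLERS[msg["type"]](msg)

def register(fault_type):
    def deco(fn):
        PFCIScheduler.HANDLERS[fault_type] = fn
        return fn
    return deco

@register("node_fault")
def node_fault(msg):
    print(f"shut down all ports of node {msg['nodeId']}")

sched = PFCIScheduler()
sched.precache({"type": "node_fault", "nodeId": 4, "startTime": 60.0})
sched.advance(sim_time=60.0)  # fires exactly at startTime
```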

4.1.2. Get and Set Packets

In addition to fault configuration, the PFCI can also acquire and modify model parameters via the GET and SET packets. The primary function of a GET packet is to obtain parameters from the CSM, such as node operating status, communication parameters, fault parameters, etc., and display them on the GUI of the OFCM for operator reference. Its workflow is as follows:
First, when an operator needs to view a node's parameters, the OFCM generates a GET packet and sends it to the CSM through the PFCI. The CSM then calls the PFCI_ProcessEvent() function to determine the packet type and invokes the corresponding handler according to the result. Next, the CSM sends an ACK message for the GET packet to the OFCM to indicate successful receipt; the ACK message contains the parameters requested by the OFCM. Finally, the OFCM receives the ACK packet and thereby obtains the required parameters. The interactive flow of the GET packet is depicted in Figure 5a. The primary function of the SET packet is to modify the model's parameters in the CSM; its workflow is similar to that of the GET packet, as shown in Figure 5b.
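As a concrete illustration of the OFCM side of this exchange, the sketch below sends a GET request and reads the parameter out of the ACK. The paper does not specify the packet encoding, so JSON over UDP and the endpoint address are our stand-ins.

```python
import json
import socket

CSM_ADDR = ("192.0.2.10", 5000)  # hypothetical PFCI UDP endpoint

def get_parameter(node_id: int, name: str, timeout: float = 1.0):
    """OFCM side of the GET exchange: send a request and return the
    parameter value carried back in the ACK message."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        request = {"op": "GET", "nodeId": node_id, "param": name}
        sock.sendto(json.dumps(request).encode(), CSM_ADDR)
        ack, _ = sock.recvfrom(4096)     # ACK confirms receipt and
        return json.loads(ack)["value"]  # carries the requested value
    finally:
        sock.close()

# e.g., status = get_parameter(4, "operatingStatus")
```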

4.2. Real Sub-Net Access Interface

The main function of the RSAI is to connect sub-nets of physical grid devices (or a single device), such as DC sub-stations, load sub-stations, and other physical devices, to the SVN, realizing data interaction between the sub-net and the CSM and thus improving simulation accuracy. The RSAI's framework is shown in Figure 6. It implements a one-to-one mapping of a real router (device) to a virtual router (node) in the SVN, e.g., A to A' and B to B'. The RSAI consists of libpcap, libnet, a real–virtual gateway, and a real–virtual packet converter, which are described below.

4.2.1. Workflow of the RSAI

Figure 7 shows the workflow of the RSAI by illustrating the data interaction between real sub-nets C and D. Here, we assume that the PCM has converted all non-standard IP packets to standard IP packets. Each accessed sub-net is linked to a network interface card of the CSM (C is linked to eth0, and D is linked to eth1).
For example, once a packet in physical sub-net C is sent to D, it is transmitted along the solid red arrows. First, the packet is routed to eth0 and captured by libpcap, a packet capture library for Linux systems, whose buffer stores the captured packet. Each pair of mappings (e.g., C to C') constitutes a unique real–virtual gateway and maintains a real–virtual routing table built before the simulation starts. To obtain the corresponding virtual gateway, the real–virtual gateway mapper compares the packet's IP addresses (source and destination) against the real–virtual routing table. The packet converter then re-encapsulates the packet into a new virtual packet, which carries the captured IP packet as its payload and the obtained virtual IP addresses in its header, and injects it into the SVN. If the destination (D') is reachable through the SVN, the virtual packet is handed over to the converter again, which extracts the real IP packet from its payload. Finally, via gateway 2 and libnet, an interface library that provides network packet construction, processing, and sending functions, the real IP packet is reconstructed and sent to eth1.
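The actual interface is built on libpcap and libnet in C; the short Python sketch below only illustrates the two data transformations in the middle of that path, i.e., the real–virtual routing table lookup and the payload re-encapsulation. All addresses and the 32-byte header layout are invented for illustration.

```python
# Real-virtual routing table, built before the simulation starts:
# real sub-net gateway IP -> IP of its mapped virtual node.
REAL_TO_VIRTUAL = {
    "10.1.0.1": "190.0.1.1",  # C -> C'  (illustrative addresses)
    "10.2.0.1": "190.0.1.2",  # D -> D'
}

HEADER_LEN = 32  # assumed fixed-size virtual header

def to_virtual(captured_ip_packet: bytes, src_real: str, dst_real: str) -> bytes:
    """Re-encapsulate a captured real IP packet as a virtual packet:
    the whole real packet rides as the payload, and the header carries
    the mapped virtual addresses (header layout is our own stand-in)."""
    src_v, dst_v = REAL_TO_VIRTUAL[src_real], REAL_TO_VIRTUAL[dst_real]
    header = f"{src_v}>{dst_v}".encode().ljust(HEADER_LEN, b"\x00")
    return header + captured_ip_packet

def from_virtual(virtual_packet: bytes) -> bytes:
    """Reverse step at the destination gateway: strip the virtual header
    and hand the original real IP packet to libnet for transmission."""
    return virtual_packet[HEADER_LEN:]

real_pkt = b"\x45\x00\x00\x1c..."  # bytes as libpcap would deliver them
assert from_virtual(to_virtual(real_pkt, "10.1.0.1", "10.2.0.1")) == real_pkt
```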

4.2.2. Support for Large-Scale Grid Simulation

Typically, a single EXata server can support scenarios with up to thousands of nodes. To emulate larger grid scenarios, the RSAI can connect multiple separate SVNs running on different servers. Specifically, as shown in Figure 8, a physical RSAI router connects the two CSMs, and two virtual routers (A' and A''), one in each CSM, are mapped to this router through the RSAI. In this way, the SVNs running on the two servers can interact with each other through the RSAI router to simulate a more extensive power network.

5. Tests and Results

To test the accuracy of the designed interface and the OFCM, we chose EXata version 5.1 as the communication simulator and developed the OFCM GUI using Qt 4.8.4. The device parameters for the OFCM client (laptop) and the EXata server are shown in Table 1.
Figure 9 shows the EXata test network scenario, where nodes were wired and configured for CBR data service from node 1 to node 14. Two routing lines were configured via static routes. The green and red arrows mark the primary and backup routes, respectively, and the data flow was transmitted along the primary route by default.
We set the simulation time and the CBR duration (10 packets/s) to 4 min. Test 1 configured no fault events, and test 2 pre-set the following events through the OFCM: a node fault occurred at node 4 at 60 s → the node fault recovered at 90 s → a link fault occurred between nodes 4 and 5 at 150 s → the link fault recovered at 180 s → the simulation ended at 240 s. The cumulative numbers of packets received by node 5 and node 11 in the two tests were recorded separately, as shown in Figure 10.
No fault events occurred in test 1, so the number of packets received at node 5 increased linearly. In test 2, a node fault occurred at node 4 at 60 s, shutting down all of its ports, so the numbers of packets received at nodes 4 and 5 stopped rising. At the same time, the network switched to the backup route, so the number of packets received by node 11 grew linearly. At 90 s, node 4 recovered, and data reception returned to normal. At 150 s, the link between nodes 4 and 5 failed, and node 5 was again unable to receive packets. This shows that the simulated network behavior was consistent with the fault events pre-assigned via the PFCI and that the events were executed precisely at the scheduled times.
We also compared the single-trip delay of the PFCI with that of the RTUI proposed in [34]. This delay refers to the duration from the generation of a control packet by the OFCM to the completion of its processing by the CSM. The results of one hundred tests are shown in Figure 11. The average single-trip delay of the RTUI was about 95 ms, whereas the delay of the PFCI was always zero because its pre-cache mechanism schedules fault events into the event queue before the simulation starts, significantly improving simulation accuracy. In addition, the RTUI introduced human operation delays (such as entering configuration parameters), which could dramatically affect simulation accuracy or even make the simulation impossible when testing fault-event-intensive scenarios.

5.1. Test of the RSAI

The RSAI test scenario is shown in Figure 12. In this EXata scenario, nodes 2, 3, and 4 were 200 m apart. Node 1 moved from left to right along the red flags, with the movement between adjacent flags taking 30 s; the total simulation duration was 180 s. Nodes 1–3 were in wireless network 190.0.1.0, while nodes 1, 3, and 4 were in 10.0.1.0. The node parameters are shown in Table 2. Users 1 and 2 (the same laptops as the OFCM client in Table 1) were wired to the corresponding routers, which were in turn wired to server 3. The RSAI mapped nodes 1 and 2 in the scenario to routers 1 and 2, respectively.
A video stream generated by the VLC media player running on user 1 was destined for user 2. The specific data flow direction was user 1 → router 1 → virtual network of the EXata server → router 2 → user 2. As node 1 moved, the data flow path passed through three stages, namely 1→2, 1→3→2, and 1→4→3→2. Routing with AODV caused packet loss and increased transmission latency during link switching.
After the simulation started, we employed ifstat to record the sending rate of user 1 and the receiving rate of user 2 at one-second intervals, as shown in Figure 13.
As the figure shows, the receiving rate was consistent with the sending rate most of the time, but it dropped significantly around 85 s and 135 s because path switching occurred twice, causing distinct packet loss. The real network behavior therefore matched the virtual simulation scenario, which verifies the effectiveness of the RSAI.
The delay of the interface was tested using ping, with the test scenario unchanged. Two groups of tests were set up, in which user 1 sent one ICMP packet to user 2 per second. In group 1, the packet arrived at the simulation server and was forwarded directly without passing through EXata, while in group 2, the packet arrived at the server and entered the EXata virtual network through the RSAI. We conducted one hundred tests for each group. The average round-trip time (RTT) was 0.645 ms in group 1 and 5.314 ms in group 2. The results are shown in Figure 14.
The RTT of group 2 included the transmission delay of the EXata virtual network in addition to the delay introduced by the RSAI. By adding a time stamp to each packet, the average one-way transmission delay of the virtual network was measured to be about 1.916 ms. Subtracting the baseline RTT and twice the one-way virtual-network delay from the group 2 RTT, and halving the remainder, gives an average RSAI-introduced delay of 0.4185 ms per direction, which is acceptable in the simulation test.
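Written out explicitly, this is our reconstruction of the calculation behind the 0.4185 ms figure:

```latex
t_{\mathrm{RSAI}}
  = \frac{\mathrm{RTT}_{2} - \mathrm{RTT}_{1} - 2\,t_{\mathrm{EXata}}}{2}
  = \frac{5.314 - 0.645 - 2 \times 1.916}{2}\ \mathrm{ms}
  = 0.4185\ \mathrm{ms}
```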

5.2. Co-Simulation Test

A dual-network coupling test scenario was constructed in RTDS and EXata, and the network's topology is shown in Figure 15a. The communication of sub-stations 7, 8, and 10 with master station 1 was emulated using the proposed co-simulation platform, as shown in Figure 15b. When DC bipolar blocking occurred at the upstream node, we focused on the actions of DC control stations 1, 7, 8, and 10. Three groups of comparison experiments were set up: the first group was not loaded with a DoS attack, to observe the action delay of each station when DC bipolar blocking occurred; the second group applied a small-flow DoS attack (50 Mb/s) to node 3; and the third group applied a large-flow DoS attack (70 Mb/s) to the same node. The test results are shown in Figure 16.
Comparing Figure 16a,b, the small-flow DoS attack increased the action delays of sub-stations 7, 8, and 10 by 10–15 ms, indicating that a small DoS attack delays device actions. Meanwhile, Figure 16c shows that the 70 Mb/s DoS attack caused sub-stations 7, 8, and 10 to reject the action, indicating that a large-flow DoS attack may interrupt the communication between the security control devices. The test results were consistent with the expected behavior. Therefore, the proposed co-simulation platform can analyze the impact of dynamic events, such as faults and attacks, on system protection and power system operation.

6. Conclusions

This paper proposed a co-simulation platform that could provide a test environment for dynamic grid faults and novel network protocols and devices. Based on the designed PFCI, we could accurately pre-program network fault events and modify model parameters dynamically during simulation, improving the simulation accuracy. Meanwhile, the proposed RSAI and PCM could seamlessly connect physical grid devices or sub-nets to the virtual network to realize data interaction and provide a method to scale up the simulation. Finally, the test results showed that the platform and interface could correctly and efficiently evaluate and verify the impact of communication network faults, providing a reference basis for deploying novel devices and protocols in a grid.

Author Contributions

Conceptualization, P.G. and H.Y.; methodology, P.G.; software, P.G. and H.Y.; validation, H.W., Y.L. and H.L.; formal analysis, P.G.; investigation, Z.Q. and W.W.; resources, D.W.; data curation, X.G.; writing—original draft preparation, P.G., H.Y., H.W. and H.L.; writing—review and editing, P.G. and H.Y.; visualization, P.G. and H.Y.; supervision, P.G.; project administration, P.G. and H.Y.; funding acquisition, P.G. and X.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under grant numbers 62073039 and 62203048. This work was supported in part by the China NSF under grant number 61671062 and by the China Scholarship Council.

Data Availability Statement

Due to the nature of this study, the participants did not agree to their data being shared publicly; therefore, supporting data are unavailable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mets, K.; Ojea, J.A.; Develder, C. Combining power and communication network simulation for cost-effective smart grid analysis. IEEE Commun. Surv. Tutor. 2014, 16, 1771–1796. [Google Scholar] [CrossRef]
  2. Korkali, M.; Veneman, J.G.; Tivnan, B.F.; Bagrow, J.P.; Hines, P.D.H. Reducing Cascading Failure Risk by Increasing Infrastructure Network Interdependence. Sci. Rep. 2017, 7, 44499. [Google Scholar] [CrossRef] [PubMed]
  3. Sharif, M.; Sadeghi-Niaraki, A. Ubiquitous sensor network simulation and emulation environments: A survey. J. Netw. Comput. Appl. 2017, 93, 150–181. [Google Scholar] [CrossRef]
  4. Ruano, Ó.; García-Herrero, F.; Aranda, L.A.; Sánchez-Macián, A.; Rodriguez, L.; Maestro, J.A. Fault Injection Emulation for Systems in FPGAs: Tools, Techniques and Methodology, a Tutorial. Sensors 2021, 21, 1392. [Google Scholar] [CrossRef] [PubMed]
  5. Gao, W.; Nguyen, J.H.; Yu, W.; Lu, C.; Ku, D.T.; Hatcher, W.G. Toward Emulation-Based Performance Assessment of Constrained Application Protocol in Dynamic Networks. IEEE Internet Things J. 2017, 4, 1597–1610. [Google Scholar] [CrossRef]
  6. Li, W.; Zhang, X.; Li, H. Co-simulation platforms for co-design of networked control systems: An overview. Control Eng. Pract. 2014, 23, 44–56. [Google Scholar] [CrossRef]
  7. Suhaimy, N.; Radzi, N.A.M.; Ahmad, W.S.H.M.W.; Azmi, K.H.M.; Hannan, M.A. Current and Future Communication Solutions for Smart Grids: A Review. IEEE Access 2022, 10, 43639–43668. [Google Scholar] [CrossRef]
  8. Li, X.; Huang, Q.; Wu, D. Distributed Large-scale Co-Simulation for IoT-aided Smart Grid Control. IEEE Access 2017, 5, 19951–19960. [Google Scholar] [CrossRef]
  9. Milano, F.; Zarate-Minano, R. A Systematic Method to Model Power Systems as Stochastic Differential Algebraic Equations. IEEE Trans. Power Syst. 2013, 28, 4537–4544. [Google Scholar] [CrossRef]
  10. Zeigler, B.P.; Kim, T.G.; Praehofer, H. Theory of Modeling and Simulation, 3rd ed.; Academic Press: San Diego, CA, USA, 2019; pp. 339–372. [Google Scholar]
  11. Yu, Z.; Chang, D.; Wang, X.; Ren, Z.; Du, J.; Li, X.; Shu, H. Development and application of a secure and stable remote testing system based on RTDS. In Proceedings of the 2022 IEEE International Conference on Artificial Intelligence and Computer Applications (ICAICA), Dalian, China, 24–26 June 2022. [Google Scholar]
  12. EXata Web Site. Available online: https://www.keysight.com/us/en/product/SN100EXBA/exata-network-modeling.html (accessed on 13 January 2023).
  13. Le, T.D.; Anwar, A.; Beuran, R.; Loke, S.W. Smart grid co-simulation tools: Review and cybersecurity case study. In Proceedings of the 2019 7th International Conference on Smart Grid (icSmartGrid), Newcastle, NSW, Australia, 9–11 December 2019. [Google Scholar]
  14. Tushar, W.; Yuen, C.; Chai, B.; Huang, S. Smart Grid Testbed for Demand Focused Energy Management in End User Environments. IEEE Wirel. Commun. 2016, 23, 70–80. [Google Scholar] [CrossRef]
  15. Lu, G.; De, D.; Song, W. SmartGridLab: A Laboratory-Based Smart Grid Testbed. In Proceedings of the 2010 First IEEE International Conference on Smart Grid Communications, Gaithersburg, MD, USA, 4–6 October 2010. [Google Scholar]
  16. Joo, J.; Kim, L. Strategic guidelines for the diffusion of smart grid technologies through a Korean testbed. Inf. Technol. Dev. 2016, 22, 503–524. [Google Scholar] [CrossRef]
  17. Ainsworth, N.; Costley, M.; Thomas, J.J.; Jezierny, M.; Grijalva, S. Versatile Autonomous Smartgrid Testbed (VAST): A flexible, reconfigurable testbed for research on autonomous control for critical electricity grids. In Proceedings of the 2012 North American Power Symposium (NAPS), Champaign, IL, USA, 9–11 September 2012. [Google Scholar]
  18. Darmis, O.; Korres, G.N. RTDS-supported software-in-the-loop test bed for synchrophasor applications. In Proceedings of the 2022 2nd International Conference on Energy Transition in the Mediterranean Area (SyNERGY MED), Thessaloniki, Greece, 17–19 October 2022. [Google Scholar]
  19. Tunaboylu, N.S.; Shehu, G.; Argin, M.; Yalcinoz, T. Development of smart grid test-bed for electric power distribution system. In Proceedings of the 2016 IEEE Conference on Technologies for Sustainability (SusTech), Phoenix, AZ, USA, 9–11 October 2016. [Google Scholar]
  20. Faruque, M.D.O.; Strasser, T. Real-time simulation technologies for power systems design, testing, and analysis. IEEE Power Energy Technol. Syst. J. 2015, 2, 63–73. [Google Scholar] [CrossRef]
  21. Baran, M.; Sreenath, R.; Mahajan, N.R. Extending EMTDC/PSCAD for simulating agent-based distributed applications. IEEE Power Eng. Rev. 2002, 22, 52–54. [Google Scholar] [CrossRef]
  22. Hopkinson, K.; Wang, X.; Giovanini, R.; Thorp, J.; Birman, K.; Coury, D. EPOCHS: A platform for agent-based electric power and communication simulation built from commercial off-the-shelf components. IEEE Trans. Power Syst. 2006, 21, 548–558. [Google Scholar] [CrossRef]
  23. Nosratabadi, S.M.; Hooshmand, R.-A.; Gholipour, E. A comprehensive review on microgrid and virtual power plant concepts employed for distributed energy resources scheduling in power systems. Renew. Sustain. Energy Rev. 2017, 67, 341–363. [Google Scholar] [CrossRef]
  24. Bharati, A.K.; Ajjarapu, V. SMTD co-simulation framework with helics for future-grid analysis and synthetic measurement-data generation. IEEE Trans. Ind. Appl. 2022, 58, 131–141. [Google Scholar] [CrossRef]
  25. de Souza, E.; Ardakanian, O.; Nikolaidis, I. A co-simulation platform for evaluating cyber security and control applications in the smart grid. In Proceedings of the ICC 2020–2020 IEEE International Conference on Communications (ICC), Dublin, Ireland, 7–11 June 2020. [Google Scholar]
  26. Rohjans, S.; Lehnhoff, S.; Schutte, S.; Scherfke, S.; Hussain, S. Mosaik—A modular platform for the evaluation of agent-based smart grid control. In Proceedings of the IEEE PES ISGT Europe, Lyngby, Denmark, 6–9 October 2013. [Google Scholar]
  27. Lin, H.; Sambamoorthy, S.; Shukla, S.; Thorp, J.; Mili, L. Power system and communication network co-simulation for smart grid applications. In Proceedings of the ISGT, Anaheim, CA, USA, 17–19 January 2011; pp. 1–6. [Google Scholar]
  28. Armendariz, M.; Chenine, M.; Nordstrom, L.; Al-Hammouri, A. A co-simulation platform for medium/low voltage monitoring and control applications. In Proceedings of the 2014 IEEE PES Innovative Smart Grid Technologies (ISGT), Washington, DC, USA, 19–22 February 2014. [Google Scholar]
  29. Khurram, A.; Amini, M.; Espinosa, L.A.D.; Hines, P.D.H.; Almassalkhi, M.R. Real-time grid and der co-simulation platform for testing large-scale der coordination schemes. IEEE Trans. Smart Grid 2022, 13, 4367–4378. [Google Scholar] [CrossRef]
  30. Scheibe, C.; Kuri, A.; Graf, L.; Venugopal, R.; Mehlmann, G. Real Time Co-Simulation of Electromechanical and Electromagnetic Power System Models. In Proceedings of the 2022 International Conference on Smart Energy Systems and Technologies (SEST), Eindhoven, The Netherlands, 5–7 September 2022. [Google Scholar]
  31. Tong, H.; Ni, M.; Zhao, L.; Li, M. Flexible hardware-in-the-loop testbed for cyber physical power system simulation. IET Cyber-Phys. Syst. Theory Appl. 2019, 4, 374–381. [Google Scholar] [CrossRef]
  32. Liu, R.; Vellaithurai, C.; Biswas, S.S.; Gamage, T.T.; Srivastava, A.K. Analyzing the cyber-physical impact of cyber events on the power grid. IEEE Trans. Smart Grid 2015, 6, 2444–2453. [Google Scholar] [CrossRef]
  33. Garau, M.; Ghiani, E.; Celli, G.; Pilo, F.; Corti, S. Co-simulation of smart distribution network fault management and reconfiguration with LTE communication. Energies 2018, 11, 1332. [Google Scholar] [CrossRef]
  34. Gong, P.; Li, M.; Kong, J.; Li, P.; Kim, D.K. An interactive approach for QualNet-based network model evaluation and testing at real time. In Proceedings of the 16th International Conference on Advanced Communication Technology, Pyeongchang, Republic of Korea, 16–19 February 2014. [Google Scholar]
Figure 1. The architecture of the platform.
Figure 2. Protocol conversion module.
Figure 3. Fault packet formats.
Figure 4. Workflow of the PFCI: fault configuration packet.
Figure 5. Workflow of the PFCI.
Figure 6. The framework of the RSAI.
Figure 7. The workflow of the RSAI.
Figure 8. Large-scale grid simulation via RSAI.
Figure 9. EXata test network scenario.
Figure 10. The cumulative numbers of received packets.
Figure 11. The single-trip delay between the RTUI and the PFCI.
Figure 12. RSAI test scenario.
Figure 13. The VLC send/receive rate statistics.
Figure 14. RTT of both group tests.
Figure 15. Co-simulation test scenario.
Figure 16. Action delay under different DoS attacks.
Table 1. Device parameters.

Name         | CPU                         | Memory | Disk Space | OS
OFCM client  | Intel Core i5-6400          | 8 GB   | 1 TB       | Ubuntu 12.04
EXata server | Intel Xeon E5-2620 v4 (×2)  | 128 GB | 1 TB       | Windows 7
Table 2. Node parameters.

Model                | Value
Transmission channel | Two-Ray
Physical             | 802.11b
MAC                  | 802.11
IP                   | IPv4
Routing protocol     | AODV
